I was speaking to a disabled friend of mine a few days ago who gave me the idea for this article and said that I was welcome to write it. I want to cover AI a lot in my media endeavors here, mostly because I use it every day to improve various aspects of my life, including those things I struggle with because of my disability. I think the degree to which AI can assist people with disabilities in everyday tasks is the story of the AI revolution that is yet to be told, and it is only fair that, as a disabled technologist and recent computer programmer, I write my own chapter in that story. So this is the first article in that series.
Most of us with smartphones or even computers have used AI to do something of greater or lesser significance to us. The thing I do most frequently with AI myself is preliminary research, where I ask it to summarize product reviews so that I can decide where to do further research. I always ask the AI to provide sources for its information so that I can understand where its inferences come from, i.e., where it draws its information from and what data it uses to reach its conclusions. Let's be honest, though: the language of AI decision-making is inherently lazy and imprecise, because AI doesn't make decisions. It simply applies a program to data based on a weighted algorithm. It does its best to either summarize data from the Internet or generate its own response based on that data to answer the question we ask it, but computers are not capable of making decisions. Humans do the decision-making.
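To make that concrete, here is a minimal sketch in Python of what "applying a program to data based on a weighted algorithm" looks like. The feature names and weights are hypothetical, invented for illustration, not taken from any real system: the point is only that the model computes a score, and a human decides what to do with it.

```python
def weighted_score(features, weights):
    """Return a weighted sum of feature values -- a computation, not a judgment."""
    return sum(f * w for f, w in zip(features, weights))

# Hypothetical product-review features: [average rating, review count, recency]
features = [4.2, 350, 0.9]
weights = [0.6, 0.001, 0.4]   # weights come from training or tuning, not reasoning

score = weighted_score(features, weights)
print(round(score, 3))  # 3.23 -- the human interprets this number and decides
```

The algorithm will happily produce a score from any inputs at all; deciding whether that score means "buy it" is the part only a human does.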
When any human decides to base their decision, either in whole or in part, on how an AI interprets data, they have made the decision to trust the AI's analysis rather than to spend the time and effort to evaluate the data themselves, either alone or in a group with other humans. The subject of how AIs process data is fascinating to me. I am a psychologist, but modeling the human brain is not my area of expertise. I am a clinician and work with people with disabilities who need help every day, and over the last 15 years it has been my pleasure to provide that assistance to the best of my ability. Although I will include future articles here designed to elucidate some aspects of cognitive science, I'm not going to comment on them here, because I need to research the matter first so that I can explain in plain English, to the best of my ability, how well (or not) AIs model certain aspects of the human brain.
One thing I do know for sure, though: if you base a data-driven decision on a bad set of data, it will lead to a bad decision more often than not. A popular way of saying this is "garbage in, garbage out": data with a profound source of error is bad data and will likely lead to bad decisions. There are protocols, both mathematical and human, that can be used to attempt to filter out sources of error. But these protocols are imperfect to their own degree, so as a general rule it is always better to start out with less error in your original data source when performing analyses that may later lead to decisions.
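As a small illustration of both the filtering idea and its imperfection, here is a Python sketch (the sensor readings are made up) that drops values far from the mean. Notice that with a small sample, the conventional three-standard-deviation cutoff fails to catch even an obvious garbage value, because the garbage value itself inflates the standard deviation it is measured against.

```python
import statistics

def filter_outliers(data, z_thresh=2.0):
    """Drop values more than z_thresh standard deviations from the mean."""
    mean = statistics.mean(data)
    stdev = statistics.pstdev(data)
    if stdev == 0:
        return list(data)
    return [x for x in data if abs(x - mean) / stdev <= z_thresh]

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 98.0]   # one garbage entry

print(filter_outliers(readings))                # drops 98.0
print(filter_outliers(readings, z_thresh=3.0))  # keeps 98.0: the protocol is imperfect
```

Even choosing the threshold is a human judgment call, which is exactly why starting with cleaner data beats filtering after the fact.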
Psychological science has a series of very careful research methods for attempting to remove human bias, which are very lengthy and beyond the scope of this article. Most of those methods are applied in the course of psychological research, either in a careful analysis of the data or in the design of the study itself. Bias is a source of error, which is why psychologists are very interested in studying it, and often very interested in removing it.
The Harvard Business Review has written about the topic of bias in large data sets and how numbers do not speak for themselves; I will include that reference in the endnotes of this article for those interested. Data produced by humans will have human biases in it, and careful action ought to be taken to eliminate as many sources of error as possible in data, including but not limited to bias.

What is AI good at, then, if it is incapable of decision-making, at least in its present form? It is good at summarizing data and finding the proverbial needle in a haystack to assist humans. It appears to be able to follow the structured syntax of programming and produce functional computer code in some instances, but oftentimes in my experience that code will have errors in it, which means it needs to be evaluated by a human and perhaps adjusted to the specific task at hand. It is very good at proofreading the output of speech recognition, which, for the record, has been using neural nets (a concept from artificial intelligence that I will cover later) to adapt itself to the human voice for a number of years now. Neural nets are part of why Dragon NaturallySpeaking, Siri, and Amazon Alexa have become more accurate over the years they've been on the market. I have been using speech recognition almost every day since the early 1980s, and have watched with great joy and interest as the technology has drastically improved. One of my disabilities is cerebral palsy and the other is dyslexia, which means the improvement in voice recognition technology has been essential in allowing me to express my thoughts in the written word more easily. Prior to the significant advancements in voice recognition technology that I'm using today, I had to dictate my papers to a human in order to use a computer, because I lack the dexterity to type. It is indeed a wonder of modern technology that I can dictate this blog post for you all to read and then use AI to correct the mistakes that the speech AI (after all, that's all that speech recognition is) made in transcribing my voice. Hopefully, fewer of those mistakes will get through to you, my readers. Popular narrative would have it that many students are inclined to use AI to cheat by having the AI write a paper for them.

When I asked the AI to correct the mistakes of the speech-recognition AI, by contrast, I requested that it not add any content to the article other than correcting my grammar. Part of the reason AI is more effective at correcting grammar than the spell checker built into your favorite word processor is that it has a lot more data to work with: modern AIs that work with written material were typically trained on a large portion of the Internet up to a specific date.
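A tiny sketch of the difference (the word list here is a stand-in, not any real spell checker's dictionary): a conventional spell checker only asks whether each word exists, so a wrong-but-real word sails right through. Catching it requires knowledge of context, which is what a model trained on vast amounts of text supplies.

```python
# A dictionary-based spell checker flags only words missing from its
# word list; it has no notion of context.
WORD_LIST = {"their", "there", "they're", "going", "home", "to", "the", "store"}

def spellcheck(sentence):
    """Return the words a simple spell checker would flag."""
    return [w for w in sentence.lower().split() if w not in WORD_LIST]

# "their" is a real word, so nothing is flagged -- even though context
# shows the writer meant "they're".
print(spellcheck("their going home"))   # []
# A made-up word, by contrast, is caught immediately.
print(spellcheck("thier going home"))   # ['thier']
```

This is exactly the kind of real-word dictation error that a context-aware model can fix and a word list cannot.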

One final major bias of AI is in its depiction of images, since it is trained on the artwork it had access to. If that artwork is visually biased in its depiction of a specific group, then the AI's output will be biased in the same way unless it is somehow rigorously corrected before that bias affects the output of the large data set. For example, when both my friend and I asked various AIs to depict a person in a wheelchair, the AI gave us output showing what was probably a 30- or 40-year-old wheelchair, which did not reflect in any way the mobility technology that I or my friend use on a daily basis, thank God. If this data set will be used to depict our lives in the future, then there should be a rigorous and documented course of action for us to take when those depictions are not accurate. If no such course of action exists, the bias will be perpetuated and the inaccuracies will continue, which will lead to further prejudice and misunderstanding from those who have not had direct experience in the matter. It is humans who must make the change in the computer code and/or the data to correct this. Asking an AI to correct itself without new input is very likely to be a futile endeavor, much like asking a human to correct themselves after 40 years of making the same mistake, without, for example, therapy to work on the subject. We can make changes to ourselves and we can make changes to AI, but both take work, and it turns out both are important.
