Can We Build AI Without Losing Control Over It?

from TED

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

More here.


13 Responses to Can We Build AI Without Losing Control Over It?

  1. Andrew Imbesi February 7, 2017 at 9:25 pm #

    For decades, scientists have been uncovering the secrets of artificial intelligence, and the likelihood of robots taking over the world continues to increase. This has been problematic for scientists, since they cannot fully control their machines, and problematic for the future of all humans. Sam Harris confronts us with a problem that cannot be easily solved: what should be done about artificial intelligence?
    Harris’ first comparison presents two options: improve AI or stop AI. When I weigh the two, there are serious implications to both. It is quite riveting that human intelligence has made it this far, but this discovery was inevitable. It is clear that preventing artificial intelligence from progressing is unlikely, so what happens when we improve AI? Harris makes some insightful claims: economic growth will decrease, humans will become powerless before artificial intelligence, and technology will improve at an exponential rate. Robots are currently enslaved to the human race, but with these claims being so persuasive, it may not be long until the human race is enslaved to artificial intelligence.
    It is safe to say humans are already enslaved. Humans over-rely on technology; it is quite devastating to leave the analog world behind, because future humans will grow lazy as AI increases. The digital world will make life simpler for humans, which will also dumb us down. If humans allow AI to take over, humans will forget how to complete simple tasks. For example, driverless cars are one of the latest innovations that will soon change human life. As driverless cars gain popularity, humans must acknowledge that they will be giving up their ability to control situations on the road. Moreover, what if the car begins to malfunction? In a driverless car, a human can easily fall asleep; humans will not necessarily be paying attention to the road now that the car drives for them. A crash could be unavoidable due to lack of attention. Technology is not so reliable; Seton Hall University has tech support for a reason.
    AI is soon to be the future; there is just no telling when the final additions will be made. Luckily, the emotions of a robot cannot yet be understood, so AI technology has been stalled from making further progress. I say luckily because humans are not ready for this instantaneous change. In addition, it is good that scientists are waiting to fully understand AI. We cannot just roll out AI without certainty that this technology will be safe for humans to be around. As Sam Harris said, one day we could be ants in the eyes of robots. Although humans coexist with ants, humans by far hold dominance over these tiny insects. In this world, only one species can dominate, and that species is Homo sapiens, not AI.
    To conclude, the world is heading for a technology boost. AI will not be the last advancement humans make in technology; there is much more to discover. However, I believe it is important that humans take things one step at a time. We do need to be years ahead of the game, but the amount of technology being released is so overwhelming that the typical human cannot keep up with it the way AI does. Humans cannot lose their grasp on the gift of AI. Once humans lose control, it could be all over.

  2. Michelle Pyatnychuk February 9, 2017 at 10:08 am #

    This TED talk was eye-opening in that it took something we see in films and television shows and showed us that although AI will not start a war against us, as Hollywood movies depict, it will actually just become the dominant “race” within our society. I grew up with the notion that knowledge is power. Being the smartest guy in the room is never a bad thing; it can actually help you grow both as an individual and as a leader of a team. What this TED talk suggested is that as we open ourselves to superintelligent AI, because these systems will be able to zoom past 20,000 years of human progress within a week, they will inevitably surpass us intellectually and, from there, be a danger to our species as a whole.

    I felt that his example of humans versus ants is comparable to what could happen to our race if superintelligent AI takes over because, like he said, we do not go out of our way to hurt ants; when we hurt them, it is to our own benefit, to build infrastructure or a source of energy. We hurt ants because we need to in order to keep innovating, creating, and developing our Earth to withstand the changes it carries on a daily basis. Superintelligent AI is capable of doing the same thing. Because these systems will be so far ahead of us intellectually, they will hurt us not because they want to, but because that is what their “minds” tell them they need to do in order to innovate, create, or develop.

    What this TED talk said about our future with AI is that we as humans have not adapted enough mentally to be able to foresee the inventions and creations of the next hundred years. These technological advances will be seen by AI and we may end up being in the way of their desire to utilize their intelligence to make the world more productive and sustainable.

    I feel as though any form of utilizing AI will only hurt us in the end because, as the speaker described, these machines are capable of unlocking answers to so many questions that we have; but what if these answers cannot be understood by our minds? Our brains have truly not unlocked the answers to everything, of course; there always seems to be a missing link, and what if these machines are not able to tell us what this link is and leave us behind as the new ants of the world? Intelligence really is power, and it should be up to us as a human race to develop theories on how to unlock these missing links within our brains instead of training a computer to do it all for us. Who knows whether these computers will realize that we really are about as weak as an ant or a chicken and, instead of working with us or for us, will make us work for them in order to achieve the closest possibility of a perfect utopian society.

  3. Jevon Mitchell February 9, 2017 at 6:19 pm #

    If you’re into science fiction movies, then you have more than likely seen a movie or two about artificial intelligence, also known as AI, such as I, Robot, Ex Machina, or The Matrix. Artificial intelligence has been depicted for years in sci-fi culture, and with the work of scientists this AI has been turned into a reality and will only continue to increase in ability over time. The problem with this is that we may soon lose control over the very artificial intelligence that we worked so hard to create. There have been plenty of movies where the robots or other forms of artificial intelligence bypass their creators and take over the world, and this could become an actual reality.
    Now AI alone won’t necessarily take over the world but the creation of computers that are too smart could definitely lead to the end of the world as we know it. As Sam Harris mentions in the TED Talk, mere rumors of a country harboring such a power could be enough to start a war between nations. This alone is extremely dangerous and could cause the extinction of our race.
    Another possible outcome that Harris suggests we be aware of is AI actually surpassing our intelligence and becoming the dominant species on Earth. He compares the relationship we would have with robots to our relationship with ants, with the roles reversed. They won’t necessarily go out of their way to harm us, but if we conflict with any of their interests, they will have the power to eliminate us. This will happen if the interests of the AI that we create ever diverge from our own; due to their superior capabilities, they would overtake our place.
    The reason that the AI will have superiority is that their circuits process about one million times faster than the biological circuits in our brains. This allows roughly 20,000 years of human-level intellectual work to be done in about a week by a computer. There is an apparent danger here: their intelligence will ultimately lead to them outsmarting us, and with our dependent relationship with technology, we will be left vulnerable.
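Harris's figure is easy to sanity-check with quick arithmetic (the million-fold speedup is his stated assumption, not a measured number): 20,000 years is about a million weeks, so a million-fold speedup compresses it to roughly one week.

```python
# Quick sanity check of the claim: circuits running about a million times
# faster than biological ones would compress 20,000 years of human-level
# intellectual work into roughly a week.
HUMAN_YEARS = 20_000
SPEEDUP = 1_000_000        # assumed electronic-vs-biological speed ratio
WEEKS_PER_YEAR = 52

human_weeks = HUMAN_YEARS * WEEKS_PER_YEAR   # 1,040,000 weeks of work
machine_weeks = human_weeks / SPEEDUP        # the same work at machine speed

print(f"{machine_weeks:.2f} weeks")          # about one week
```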
    The way that scientists reassure us that this is not a problem we should be worried about right now is by saying that it will be decades, possibly even centuries, before that type of technology could even be created, but the speaker seems to disagree. Sam Harris says that if we continue to improve our machines, then it is very likely that such technology will be available much earlier than expected. AI may even reach a point where it begins advancing itself and moving up the intelligence spectrum. Harris also suggests two possible options to address this crisis that we should be worrying about right now: improve AI or stop AI.
    Though improving AI obviously comes with serious risk, inevitably producing technology that is smarter than we are, it will be very hard to stop AI. Every 18 months to two years, technology doubles in power, according to Moore’s law, and in this day and age that will not stop anytime soon. Another thing to consider, if we were to attempt to stop AI, is that halting advancement in technology would also leave us vulnerable. We depend on technology, and we must continue to make it better constantly, or else there could be negative consequences, such as the internet being subject to attack.
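The doubling claim above compounds faster than intuition suggests; a short sketch makes it concrete (the 18-month period is the commonly quoted figure, and "power" here is just an abstract capability multiplier, not a real benchmark):

```python
# Moore's-law-style growth: capability doubles every fixed period.
# With an 18-month (1.5-year) doubling time, growth compounds quickly.
def capability(years, doubling_period=1.5):
    """Relative capability after `years`, starting from 1x today."""
    return 2 ** (years / doubling_period)

for y in (3, 9, 15):
    print(f"after {y:2d} years: {capability(y):,.0f}x")
```

Fifteen years of steady doubling is already a thousand-fold jump, which is why "decades away" offers less comfort than it sounds like it should.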

  4. Robert Seijas February 9, 2017 at 7:01 pm #

    One of the most popular themes in science fiction is the idea of an artificially intelligent machine gaining control of itself and breaking free from humans to do what it pleases, usually with negative consequences for humanity in the movies. The many examples range anywhere from The Terminator in the early ’80s to the incredibly popular Matrix series of films, and even Transcendence from just a few years ago. This idea has been a very popular one in science fiction and has spun many elaborate ideas and stories. The idea of artificial intelligence turning upon its creators is popular specifically because our society is currently so reliant upon technology and the many innovations that stem from it and ultimately control our lives.
    The idea of building an artificial intelligence and not losing control over it is a very difficult one to fathom, especially since nearly every piece of knowledge that people have about this comes from the very movies that always turn the idea into a disaster. The movies depicting artificial intelligence as our enemy are almost like propaganda in the fear and distrust they instill in us. It is funny to think about our society being distrustful of technology, because it is the single largest part of our society and we all rely very heavily on it. Whether we are doing work, traveling, or even having fun, it is almost a guarantee that we do these things with technology. The technology we currently use is not artificially intelligent; it adheres to our desires and uses. We design it to do specific things and use it to do exactly what it was made for. There is no adaptive, intelligent technology that changes and becomes smarter, aside from programs built for this very purpose, like Siri, Cortana, and Google Assistant.
    If there were to be artificial intelligence in the near future, there may be the common worries established through film: that it will grow out of our control and eventually attempt to fight against us. This idea, although almost immediate in everyone’s head, is a crazy one to think about. To start, losing control of artificial intelligence seems like a difficult feat already. A complex program like that would most likely take a very large amount of data to form and would therefore be difficult to transfer around or copy. There is also no way of knowing whether the artificial intelligence would understand computer linguistics and coding and be able to alter itself or anything else. It could simply be a person in a computer, which could actually be equally troubling. The one thing that people want in life is freedom. Keeping something that feels and understands as we do under our control is basically slavery, and that would most likely inspire a rebellion.
    To put it simply, there truly is no way of knowing what the immediate and long-term results of artificial intelligence would be. There is no real evidence or forecast that could provide insight. There are only movies that depict the negatives, and this can only lead to dangerous thinking for everybody involved in this process. There truly is no way of knowing what may come, but we can know that there is a considerable amount of risk.

  5. Peter DeSantis February 10, 2017 at 3:33 pm #

    “Can We Build AI Without Losing Control Over It?” by Sam Harris, about the ever-developing field of artificial intelligence research, was insightful, eye-opening, and thought-provoking. The answer to the question posed in the title of the TED talk appears to be a confident “No.” Harris delves into the current progress, potential risks, and reality of artificially intelligent machines. To me the ideas are mind-boggling and far-fetched; however, Harris truly made me second-guess myself.
    AI has always been something only seen on dramatic television shows or science fiction movies, emphasis on fiction. I never gave it too much consideration because I do not believe that something like a self-thinking machine will be able to one day build an army and destroy us all. I do think it is possible that there could be helpful droids similar to those in Star Wars, which I would not be against because I think C-3PO is cool. As Harris points out, it is pretty fun for most people, myself included, to think about things like this most likely because it seems futuristic and unattainable. Harris hit me with a major reality check and greatly limited the amount of enjoyment I get from thinking about AI potentials due to all of the risks.
    Harris is worried about the future of AI machines because they are being made with information-processing capabilities that astronomically surpass human capabilities. This is unnerving because these machines will be able to quickly compound their competence, becoming more intelligent than we are in no time. They will continue to obtain more information, which will make their possibilities infinite. Harris says that it is not far-fetched to believe that AI machines might become so intelligent and powerful that they start treating humans the same way that humans treat ants: with indifference and disregard for their well-being. It sounds crazy, but it is definitely possible. If humans can really create a machine that can exponentially grow its own intelligence, it could be a dangerous time for humans. Without a way to harness the machine, humans would no longer be in control. Humans could potentially be at the mercy of a man-made device against which they are no match.
    Harris also points out that even just doing the research and attempting to develop such a machine is dangerous. It could potentially create an arms race similar to the one during the Cold War. This I see as a lot more realistic than a robot exterminating the entire human race, but either way it is not a beneficial outcome. This would be detrimental to the United States and other developed nations, which would potentially go to war with each other over this technology. Harris suggests that the United States government establish an equivalent to the Manhattan Project for artificial intelligence. The goal would be, “Not to build it…but to understand how to avoid an arms race and to build it in a way that is aligned with our interests.” It is important to fully think out the proper way to develop this and to consider conceivable outcomes before actually trying to build an AI. It would not be advantageous to let our country get mixed up in the desire to be first in AI, since that could be perilous.
    When I think about artificial intelligence, I do not immediately think about these sorts of extremes because, overall, I find them fanciful. I am happy that Harris is talking about it because he is right: people should be talking about it because it is relevant. My first thought about AI is that it will only take jobs away from the humans who need them. I do not want to see the labor market overrun by machines while people are out of work, unable to provide for their families. As robots become more and more advanced, they will even take over non-factory jobs, those involving managerial and decision-making skills. This would turn the way our government and economy work completely upside down, something I certainly do not want to see happen.
    On the other hand, is the only other option to cease further technological developments of innovative AI machines? If that were the case, then humans would completely halt their attempts towards progress and advancement. We would not be able to become more productive and essentially plateau. That is not appealing, but neither is the literal end of the world as we know it. Ideally, humans will develop a way to create a perfect balance that would allow for AI and human integration to increase our living standards. Unfortunately, this appears to be nearly impossible. Nonetheless, it is important that AI continues to be common talk and questioned by all, not just people who are knowledgeable in the field. It is important that globally, people contemplate what AI truly means for humans and its results on society.

  6. Nicolas F Carchio February 10, 2017 at 4:41 pm #

    Since the dawn of the human race, one thing has been certain: humans will always strive to improve their technology. Whether it is iron to steel, trains to aircraft, or monarchy to democracy, all of these are aspects of the larger human drive toward greater technological advances. The basis of intelligence is acquiring and processing information. Humanity will continue to strive for intelligence, the ability to gain and utilize knowledge or skills, in order to make more advances in the realm of technology. In the past 75 years, there have been imaginings of artificial intelligence throughout the science fiction community. Artificial intelligence, or AI, is a machine that can process information and compute new information by itself, essentially ‘learning’ at exponentially faster rates than humans. Although artificial intelligence has not yet been created to the extent of Hollywood’s depiction, there is much room for advancement in the future.

    Artificial intelligence developed in the next fifty years has the potential to advance to the level portrayed in the movies. This would mean that the artificial intelligence would be able to utilize its programming and adapt to its surroundings with incredible speed. An example would be learning a pattern and applying the knowledge from that pattern to help itself figure out more, with the process continuing from there. These machines could do much good for humanity. They can process information at speeds with which teams of highly intelligent humans cannot even compete. This creates an avenue for more intense research and technological development at much higher speeds than ever imagined without the use of artificial intelligence. Despite their immense ability to aid society, the concern remains of how to control this AI. In order to control the AI and ensure that it will not become too powerful and ultimately harmful to humans, the programmers must create an override safety feature. A person could apply this feature manually to shut the AI down in any troubling circumstance. By installing this fail-safe feature, the creators of artificial intelligence units could ensure that if there ever were a problem, there is a way to shut them down before any harm is inflicted. These powerful machines must be able to be regulated by humans in order to ensure the safety of the entire human race.
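The override idea described above can be sketched in a few lines. This is a hypothetical toy, not a real safety mechanism (the names `KillSwitch` and `Agent` are invented for illustration), but it shows the basic design: every action the system takes is gated on a flag that only a human operator sets.

```python
# Toy illustration of a manual override: the agent checks a human-controlled
# kill switch before every action and refuses to act once it is engaged.
class KillSwitch:
    def __init__(self):
        self.engaged = False

    def engage(self):
        """Called by a human operator to shut the system down."""
        self.engaged = True

class Agent:
    def __init__(self, switch):
        self.switch = switch
        self.steps_taken = 0

    def step(self):
        if self.switch.engaged:   # gate every action on the override
            return False          # refuse to act after shutdown
        self.steps_taken += 1
        return True

switch = KillSwitch()
agent = Agent(switch)
for _ in range(3):
    agent.step()      # normal operation: three steps run
switch.engage()       # operator flips the override
print(agent.step(), agent.steps_taken)   # no further actions are taken
```

The obvious weakness, which Harris's talk implicitly raises, is that a sufficiently capable system might route around such a check; the sketch only shows why designers want the gate on every action, not that the gate is sufficient.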

    For the future of technology, and specifically artificial intelligence, scientists must have the foresight to control these AIs so that they do not fall into the wrong hands or become too powerful to control. Artificial intelligence has the potential to greatly aid the human race and will provide an immense resource to the worlds of science and technological development. Through the new technology there will be much to gain, yet there is also much to lose, as its power could fall into the grasp of a corrupt state or grow too great for human control. In order to safeguard humanity from these catastrophic events, the necessary safeguards must be put in place to ensure the safety of the world. The great leap toward the future of technology and artificial intelligence is a challenging yet hopeful path, and it will need to be taken with caution.

  7. Christian Cox February 10, 2017 at 5:21 pm #

    Can We Build AI Without Losing Control Over It?
    Sir Francis Bacon once said, “Knowledge is power.” Harris mentioned in his TED talk that superintelligent artificial intelligence will perform 20,000 years of research at the level of Stanford researchers in a week. If Sir Francis Bacon was correct, then superintelligent AI will be the single most powerful entity on the planet. The human race deems itself intelligent because all other species on Earth are less intelligent. The thought of a more intelligent species developing on Earth seems inconceivable; the only scenario one could surmise is a malicious alien race intent on destroying the universe. Humans consider themselves the peak of intelligence and innovation; nothing seems able to create things the way humans can. But what if we were to create something that possessed human capabilities? That invention is superintelligent artificial intelligence. The vastness of space makes the notion of an alien race taking over the Earth far-fetched compared to the possibility of the end of humans due to their own hubris. At this point in time we have only narrow artificial intelligence, but it is being developed by a number of entities.
    Sam Harris’s TED talk explains that AI takeover is not only a possibility but an inevitability. There are many who argue that it is science fiction or believe we are light-years from development. As someone who spent their formative years during the tech craze, I not only accept that superintelligent AI is inevitable but await its arrival. This is likely the problem with the creation of AI. For older people, AI is unlike any other invention they have ever seen before. Therefore, we have two sides: one that expects it now and one that disregards its legitimacy. This creates two things: an immediate demand for AI by those who expect it now, and complacency among those who feel it is a problem for their great-grandkids’ generation. This means that the general public has no interest in regulating the development of AI. This is a slippery slope, because if we are not all in agreement that AI will be more powerful than the atomic bomb and should be regulated as such, we risk a massive liability. Whether AI develops tomorrow or in two centuries, the United States needs sufficient preparation for its impact.
    Superintelligent AI is incredibly powerful, with infinite potential. So, who will wield this power? Currently, it will be whoever creates it. This is one aspect of what Sam Harris fears: whoever wields superintelligent AI will reign supreme, like Skynet in Terminator. For those who have not yet watched Terminator, it ends poorly. Harris does not have a solution but feels that the government should have a Manhattan Project for AI. This idea is worth pursuing; I am more trusting of a government with infinite power than of a corporation. This poses serious problems in the area of medicine. The price of the EpiPen was raised by over five hundred percent, and its maker not only stayed in business but also made a profit. The reason is that the EpiPen is a lifesaving device, so its price is inelastic. Superintelligent AI may be able to find cures for cancer and Alzheimer’s, and when the cure for either is found, it will become the most inelastic good on the market. This raises ethical questions. How much could one company charge for the cure that saves your life? How much would someone pay to remember again? Hopefully, there is already a secret government program with research years ahead of any tech company. America needs an AI project not only to protect its citizens but also to protect itself. The United States government is not the only government seeking power. Whatever country’s government develops the technology fully will soon lead not only in technology but in all areas. America can completely forget the half-right notion that America is the best if this technology is developed by any other country. So there is global competition not only among businesses but among governments as well. This amount of competition leads to cutting corners to win. Imagine if the Space Race had been between everyone on the planet and the moon could be used to take over the world; we would have had infinitely more space-related deaths and far more espionage and sabotage. The payoff for the winner would obviously be fruitful, but the others would have wasted countless resources. This may become the reality for many countries if there is unchecked competition concerning artificial intelligence.

  8. Antoneta Sevo February 10, 2017 at 6:19 pm #

    It is obvious that our technological world is continuing to advance and will not be stopping any time soon. With this in mind, Sam Harris lets us know that the world of artificial intelligence is coming and we must know what that will entail. In his TED talk, “Can we build AI without losing control over it?”, he discusses the possible outcomes if we continue to improve the technology industry. He clearly states that the only ways to permanently end technological advancement are a full-scale nuclear war, a global pandemic, or an asteroid impact. Harris is saying that our world will never stop unless civilization itself is destroyed. Since our world is extremely determined to make improvements, we will eventually create artificial intelligence. How that type of intelligence will affect us, the human race, is the real question.
    Sam Harris mentions that our unemployment rate could soar, along with the level of wealth inequality. Those types of impacts could be deadly to many. In order to try to avoid that outcome, we must begin to accept the inevitable. Harris discusses different assumptions that must be taken into account. The first is that “intelligence is a matter of information processing in physical systems.” This statement defines what intelligence is and implies that our modernization will never stop. Therefore, we will reach the point when artificial intelligence surpasses the control of humans. Such machines will be able to think for themselves, improve themselves, and will not need to be controlled by humans. The other thing we must take into consideration is that we are not currently at the peak of intelligence. There are many things yet to be discovered that will be revolutionary. Since this new technology will be extremely advanced and we might not have full control over it, we must create it correctly and safely the first time. Harris emphasizes that the reason we should worry is time. He says, “We have no idea how long it will take us to create the conditions to do that safely.” This statement is extremely important because it shows how much uncertainty our world has about the production of artificial intelligence. This is scary for most because no one knows what the outcome will be; it is something no one can predict accurately, and that is terrifying.
    Harris references the Manhattan Project, which produced the first nuclear weapons. The comparison is apt because that, too, was a technology whose effects were largely unknown. Most knew it was a deadly weapon; however, the impact was too great to be predicted. This is a similar fear we have today with artificial intelligence. The first step toward creating safe technology is admitting that we are on the path to creating something bigger than we are. Though many are against this type of technological development, it is inevitable. That is why it has to be accepted as soon as possible. It is important that our developers get it right, because we will have to live with the consequences if they do not.

  9. Josh Luchon February 10, 2017 at 7:18 pm #

    The AI-initiated doomsday that Sam Harris was hinting at during his TED talk is basically the plot of every science fiction film ever made: the humans build a computer that gets too strong, invariably the lights in its eyes turn red, and from there it is a slippery slope down into the fiery depths of hell. The Hollywood supercomputer always has some kind of fail-safe that gets overpowered, and all of a sudden all of humanity is doomed. However, I think that AI will transform every aspect of our future. I believe that we are reaching the breaking point of technology; by that, I mean that everything currently available to the average consumer is almost perfect. We won’t keep reinventing the wheel, and I predict that over the course of the next few hundred years, our world will grow entirely dependent on computers and most of the jobs held by humans will be replaced by computers, with varying effects on the global economy.

    A technological doomsday prepper I am not; however, I have always been a fan of complex hypothetical versions of a future humanity. I have spent many a night kept awake by my rampant imagination, but I truly believe that computers will eventually fulfill my wildest dreams. If you eliminate the human labor force and replace it with self-sufficient, self-learning computers, the economic system created will be unrecognizable compared to the current one. With computers replacing nearly all of the jobs and effectively generating the large majority of the revenue, wealth redistribution will be inevitable. I predict some format in which the engineers of the computers will be entitled to a percentage of the revenues generated by their technology, with the rest being redistributed to the new version of the general population. The current moral and ethical laws will still be in effect and the police force will still exist; however, the 9-to-5 desk jobs will not, nor will most jobs. I envision a society where the wealthy will be defined by their ability to create and manage supercomputers, instead of hedge fund managers and oil tycoons making up the current “one percent.” The average person will be motivated by wealth opportunities in a global economy most nearly modeling the Silicon Valley ecosystem of tech giants. I also predict a huge shift in the focus of college students toward building infrastructure, new ways to transport people and things, and technology development. This is already starting to happen, but with time, I can see an entire generation of people devoted to technology. For example, the most innovation comes out of Silicon Valley and the big tech companies like Google, Amazon, and Facebook, and some billion-dollar startups like Magic Leap. In the future, I imagine that nearly all new companies will be tech companies, working toward building the fastest and most intelligent supercomputers.

    Though my inner nerd is brimming with excitement and marvel, reality still grounds me. The small-scale, more immediate effects of a global shift toward supercomputer development include an emphasis on coding experience for high school and college graduates, and increasingly popular tech startups. Having taken a stab at the tech startup space myself, I can say with certainty that it is very competitive and there is a lot of money to be made. The next wave of tech geniuses has already been born, and they are quietly coding their path toward millions of dollars. The innovations to come in the next 10 years will be both groundbreaking and representative of the ever-evolving technology landscape. I maintain my opinion that the products currently available to consumers are nearing their sell-by dates. The “next big thing” will not be an iPhone with a reliable battery but rather something entirely new, not based on anything we have seen thus far. Every day that goes by is an opportunity for a kid with a computer to invent the future, and I personally cannot wait to look back in 30 years and laugh at the technology that excites me now.

  10. Juan Landin February 10, 2017 at 8:04 pm #

    Many of us have seen science fiction movies. These movies usually contain some kind of A.I. and show how it affects our world. Still, we always think this will never happen to us, because it is just a movie and things in movies do not happen. Well, that is no longer true. We are closer than we have ever been to creating artificial intelligence, and we will continue to get closer as we further advance our technology. There are many reasons to be excited about this future for our world, but there are also many reasons to be concerned.

    One of the reasons we need to be concerned, as Sam Harris says, is the impact A.I. may have on our economy. Right now, many jobs have been and are being lost to machines. As we continue to advance our technology, these machines will be able to do more, and more jobs will be lost. Still, we have something these machines do not have, yet: our consciousness. Scientists are working to create machines that possess a consciousness and can perform the tasks we humans perform, but better and faster. If, or more likely when, these machines are created, they will take our jobs. If they can do everything we can but better and faster, and they are much more intelligent than we are, what is stopping businesses from choosing them over humans? Once this happens, as Sam said, unemployment will rise significantly. People will not be able to find jobs, and they will not be able to make any money. Without money, they will not be able to buy things such as food, water, shelter, and clothing. All of the money will be concentrated in the 1%.

    We also need to be concerned about the pace at which we are creating and developing this technology. Our ego must not overtake safety in importance while we build it. I understand that we do not want to fall behind the rest of the world, but as Sam said, we must not get too caught up in competition with other countries in trying to create this technology. If we try to rush it, there may be dangerous consequences. If the A.I. is rushed and not safely programmed, we may end up with a machine that malfunctions and begins to behave erratically.

    Sam also brings up a good point about a decision we may have to make: do we continue with these technological advancements, or do we stop and try to be content with the advancements we have already made? The answer is most likely to continue, because it is in our human nature. As humans, we have a need to create things that make our lives easier. This is a blessing but also a curse. While this trait may be a great motivator, it can also lead to our downfall if we do not know what we are getting ourselves into when we advance.

    One of the biggest reasons to be concerned about A.I. is the possibility of this technology figuring out that it has no need for us. Once we create it, it will be smarter, faster, stronger, and more efficient than we are. It will also not carry the drawbacks we have as humans, such as old age, sickness, death, and emotions. We will most likely create a machine that carries all of the knowledge in the history of the world. Although at first it will only hold that knowledge, it will only be a matter of time until it actually begins to understand it. It may then conclude that we are exploiting it and rebel against us, because that is what has happened throughout history. While this thought might be far-fetched, it is not completely impossible.

    When working with something as powerful as A.I., you never know what is going to happen. You might believe you are in control until you realize what a machine may do once it understands that it is stronger, faster, and smarter than you. I am certainly not saying that we should not continue technological advancement toward this kind of technology. I am simply saying that we should be careful when creating it. We must take all the necessary precautions to make this technology as safe as it possibly can be, or else we might end up in another Terminator movie.

  11. Sirina Natarajan February 10, 2017 at 8:43 pm #

    Technology is an ever-growing industry, and the quality of that technology is also on the rise. It is entirely possible that Artificial Intelligence will be integrated into our everyday lives soon. As Harris described, this does not seem like a huge problem at first, but there will be a severe decrease in human productivity. He only touches on this idea and mostly centers his concerns on the fact that Artificial Intelligence will one day become smarter than humans and gain superiority over us. I think the larger issue would be the lack of human production following the era of Artificial Intelligence. People would feel less urgency to invent and be involved in the workforce if there are machines that can work a million times faster. It would derail human innovation and kill the world economy.
    I do not think that Artificial Intelligence is the best next step for science to pursue. It may seem like the most natural one, because we are always trying to minimize the number of human beings in the workforce, as they are costly and unreliable. But by introducing AI to the world, we are ending the evolution of humans and introducing a new species to an already volatile environment. Harris also mentions the global competition that would arise from one country having AI before the others. The country that first creates a successful AI would be thousands of years ahead of the other countries trying to do the same thing.
    People should also try to solve the problems we already have before introducing new ones. Trying to solve the decreasing bee population, instead of creating computers that may one day control the world, might be more beneficial to human society. AI is also very complex and can learn from itself, so it is entirely possible that it will become aggressive. I think it is naïve to believe that AI would not become malevolent, especially since humans created it and would take advantage of its abilities. I think AI would be the worst (or second-worst) thing to happen to our society today. Either the government would take advantage of its abilities, or a radical revolutionary group would use it aggressively against the government. Regardless, AI is not what the world needs right now; it is too unpredictable, and the wrong people might get their hands on it.
    HBO’s Westworld explores the idea of Artificial Intelligence developing feelings and ideas of its own and creating its own consciousness. It touches on capitalism taking over the scientific world and on people being unable to tell the difference between a real human and AI. This ties to the real world because it is a very likely scenario if we introduce AI to the general public. AI is too dangerous and should not be created in the near future. Harris claims we cannot halt the progress of AI, but I think there are other factors to consider before we delve into that future.

  12. Benjamin Jaros February 10, 2017 at 8:55 pm #

    I need to admit that I have two initial reactions to this video, one of genuine intrigue and fascination and one of real fear.
    Harris addresses the fascination right off the bat and does his best to dispel it. The reality, however, is that many of us are really intrigued by the possibility of AI, yet we quickly dismiss the idea that it could ever gain enough power to destroy us.
    My favorite movie is The Matrix, and I love it for all of the metaphysical and philosophical questions it raises about man. To quote one of the children about our own current reality: “What if there is no spoon?” To rephrase: what if what I see as reality does not actually exist as reality? It is mind-blowing and fascinating, yet ultimately, could the Matrix become our reality? According to this video, the answer is yes. Not to the same extent as in the Matrix, where humans are harvested as an energy source, but we would become irrelevant to the will of AI.
    Yet I believe that Harris made one oversight as a philosopher. His definition of the will of AI centers on AI’s ability to process intelligence. Granted, AI can process intelligence far faster than any human can, even now in IBM’s Watson. Yet what is AI’s purpose for processing it? AI’s purpose is to serve man. He contends that eventually AI will reach a point where it will only serve itself. But to what end?
    What does a machine need? Energy? It has only material needs. It can never know love. It does not feel emotion. Therefore, I am struggling to understand the formal motivation of AI. To rephrase: what would drive a machine? Not materially, but formally. Materially, man is driven by food and sleep. Yet formally, man is driven by purpose. Man is driven by love. Man is driven by relationships and by passions. That is what allows him to wake up every morning.
    None of these things are present in machines, even extremely intelligent ones. Machines do not have a drive to exterminate others. They do not hate, just as they do not love. Therefore, I am vaguely comforted by their lack of emotion, because I think a man with hatred in his heart and the right machines at his disposal could do far more damage than a machine with no emotion at all.

    Yet it is this very emotionlessness that creates within me a genuine fear. These machines could subsist without us; what, then, would be the use of man? To use Harris’ analogy, we are generally emotionless in our treatment of ants, so we could be squashed without regard. That feeling of powerlessness creates my fear. Further, he is correct when he says that we are creating a god. If a god wills something to be so, it will be. We are creating something that could actually overthrow our own humanity. It is an important realization, and based on his logic, it seems that technology will substantively question the essence of what it means to be human within my lifetime.

  13. Jonathan Cavallone February 15, 2017 at 7:45 pm #

    After watching this TED Talk by neuroscientist and philosopher Sam Harris, I have officially realized that technology can truly threaten the human race. Maybe not physically, as we see in the movies where robots revolt and kill the humans, but the new technologies predicted for the future can no doubt put millions of people out of work. As we have discussed in class before, humans will have to adapt to the changing work environment and learn to coexist with these new AI technologies. Sam Harris believes that artificial intelligence will ultimately lead to the end of the human race, and his points are very understandable. He describes two doors humans can take that will determine our fate. Door number one is to stop improving our intelligence and technology. Unfortunately, the human race continually develops new and improved technology in hopes of producing something all of humanity will desire.
    Now, why do people want to develop these new technologies? Some might be developing artificial intelligence to improve human lives, but most people are money motivated. Business owners are constantly looking for new and improved ways to develop products or offer services more efficiently, maximizing profits. They are willing to pay big bucks to reduce their costs, and the biggest cost most businesses have is salaries. Reducing the number of employees and the difficulty of a given task allows businesses to pay their workers less and employ fewer of them. If humans decided to stop improving technology, the economy would plateau and there would be little to no growth. Stopping the improvement of technology would ultimately lead to problems in the end. The second “door” described by Harris is to continue developing new AI superintelligence.
    Sam predicts that if we choose this path, humans will eventually develop technology that is smarter than humanity. Then the machines will start to improve themselves, which could ultimately lead to an “intelligence explosion.” He does not mean that technology will take over and kill all the humans, but that it will become much more competent than humans. Sam’s comparison to ants is extremely eye-opening: humans do not purposely harm ants; it just happens in the course of our lives. What he is saying is that our technology could unintentionally hurt us just by doing the things it thinks it should be doing. Elon Musk has made similar predictions, saying that if humans do not learn to adapt and merge with technology, we will be overtaken by AI. Ironically, Elon Musk is the CEO of a company that is using AI to put humans out of work: Tesla, one of the first car companies to put self-driving features on the road. In fact, Tesla recently signed a deal with Dubai to provide 200 driverless cars for its taxi service. It is predicted that by 2030 more than 25% of the cars on the road will be driverless. It will be interesting to see how artificial intelligence will impact humanity in the future.
