Fei-Fei Li’s Quest To Make AI Better For Humanity

from Wired

SOMETIME AROUND 1 am on a warm night last June, Fei-Fei Li was sitting in her pajamas in a Washington, DC, hotel room, practicing a speech she would give in a few hours. Before going to bed, Li cut a full paragraph from her notes to be sure she could reach her most important points in the short time allotted. When she woke up, the 5’3″ expert in artificial intelligence put on boots and a black and navy knit dress, a departure from her frequent uniform of a T-shirt and jeans. Then she took an Uber to the Rayburn House Office Building, just south of the US Capitol.

BEFORE ENTERING THE chambers of the US House Committee on Science, Space, and Technology, she lifted her phone to snap a photo of the oversize wooden doors. (“As a scientist, I feel special about the committee,” she said.) Then she stepped inside the cavernous room and walked to the witness table.

The hearing that morning, titled “Artificial Intelligence—With Great Power Comes Great Responsibility,” included Timothy Persons, chief scientist of the Government Accountability Office, and Greg Brockman, cofounder and chief technology officer of the nonprofit OpenAI. But only Li, the sole woman at the table, could lay claim to a groundbreaking accomplishment in the field of AI. As the researcher who built ImageNet, a database that helps computers recognize images, she’s one of a tiny group of scientists—a group perhaps small enough to fit around a kitchen table—who are responsible for AI’s recent remarkable advances.

More here.


2 Responses to Fei-Fei Li’s Quest To Make AI Better For Humanity

  1. Richard Gudino January 25, 2019 at 5:16 pm #

    While I read through the article I was expecting to read about the doom and the dangers of AI, about how, if we aren't careful, AI could one day be more of a detriment than it is helpful. However, that was not the article presented before me; instead it was about the growing field of AI and a professor who is spearheading a change not only for artificial intelligence but for diversifying those who write the code for AI. The article raises a good point that Fei-Fei Li, the professor the article is about, makes about AI: "It's inspired by people, it's created by people, and—most importantly—it impacts people." This made me think that AI is almost like an art; it serves as an extension of the human condition. Just as art mirrors reality, so will our AI, so there is no need to over-dramatize the dangers of AI when we as human beings should be conscious enough to avoid putting all parts of the human condition into a program and machine that could easily replace people in the workforce. From this point spawned another idea I had not thought about: if AI serves as an extension of human beings, that means AI could have biases as well. With the field of AI being dominated by white males, Li observes that we need more diversity in these labs so that our AI doesn't have "biased algorithms." The article cites incidents where AI made errors, some of them racist, for instance labeling a black man's picture as that of a gorilla. It's also these same programs that can screen women and minorities out of jobs, with software able to go through applications and decide who is qualified among the competition.
    There are also bigger implications as we continue to develop tech at an alarming rate. For instance, a legal question can be raised: let's say the AI makes an error that denies someone something on the basis of color. That is a lawsuit, and a potential hate crime, that could plague the company using the AI. Who is responsible, and who will cover the damages? Who faces the suit? Could we hold the AI itself responsible for having a level of bias, which would indicate that it could be sentient? That would bring on a whole new set of ethical questions, ones that will need to be addressed soon because of the rapid growth of technology. We already have the Turing test in place to tell whether something is machine or man. These lines may be crossed later on, and this article helps push for the ethics of AI as it develops.

  2. Nicholas Meyerback January 29, 2019 at 9:48 pm #

    Fei-Fei Li is the director of Stanford's Human-Centered AI institute as well as the Stanford Vision and Learning Lab. Li is a pioneer in image-based AI technology. As the creator of ImageNet, Li has made her mark as a trailblazer in image recognition, using databases of millions of pictures to "teach" AI how to analyze images. But what really makes Li different, besides being an Asian-American woman in a field dominated by men, is that she truly understands both schools of thought surrounding AI.

    Nearly everyone has heard of the magic of AI: how AI can perform tasks that conventional computers could never do, how AI can see and think (quicker than humans), and how AI can be applied to physical occupations. Anyone who has heard of AI has also heard the doomsday scenarios. Scenes from The Terminator come to mind, with worlds taken over and run by robots that marginalize human beings. The other unavoidable prospect is that AI is replacing humans in almost every job. In fact, last week I wrote about how AI will do just that, stating that humans must adapt to this new competition by gaining new skills. This fear is almost as strong as the nativism of Americans who think Mexicans are "stealing" their jobs.

    But what people fail to understand is the benefits that come with AI. AI is not here to destroy society. It is here to increase efficiency, minimize mistakes, and save lives. One of the things AI is adept at is recognizing human mistakes. This is crucial where human error leads to loss of life: in healthcare, in law enforcement, and on the roads. If AI can help a doctor locate organs and determine exactly where to make an incision, then is it really detrimental? We already use software to tell ambulance drivers the fastest route in an emergency based on data about landscape, roads, and current traffic.

    Li recognizes the good and the bad that encompass AI, and how to maximize the good. AI is built by humans. AI will only cause destruction if its human inventors program it to destroy or intend for it to destroy. Li makes it clear that people are the safeguards of our society and that AI will only have negative outcomes if we let it. For this reason we must evaluate not just the technology itself but also who is creating it.
