Lila Ibrahim of DeepMind: “It’s hard not to experience imposter syndrome”


Lila Ibrahim is the first ever chief operating officer of DeepMind, one of the most famous artificial intelligence companies in the world. She has no formal background in AI or research, the company’s core business, yet she is responsible for managing half of its employees: a global team of about 500 people, including engineers and scientists.

They are working on a single, rather amorphous task: building an artificial general intelligence, a powerful machine version of the human brain that can advance science and human development. Her job is to turn that vision into a structured operation.

“It’s hard not to experience imposter syndrome. I’m not an artificial intelligence expert, and I’m working with some super-smart people here… For the first six months, it took me a while to understand anything in some of our research sessions,” she said. “But I realized that I was not hired to be that expert. I was hired to bring my 30 years of experience in understanding the human side of technology and its impact, and to help us achieve this fearlessly ambitious goal.”

The 51-year-old Lebanese-American engineer joined DeepMind in 2018, moving her family from Silicon Valley to London. She had previously served as chief operating officer of the online education company Coursera, and before that spent 20 years at Intel. When she left Intel in 2010, she was chief of staff to CEO Craig Barrett in an organization of 85,000 people, and had just given birth to twins.

As an Arab-American and a female engineer in the Midwest, Ibrahim was “always the odd one out”. At DeepMind, she is also an outsider: she comes from the corporate world and has worked in Tokyo, Hong Kong and Shanghai. She also runs a non-profit organization, Team4Tech, which recruits volunteers from the technology industry to improve education in developing countries.

DeepMind is headquartered in King’s Cross, London, and is led by Demis Hassabis and a mainly British leadership team. In her three years there, Ibrahim has doubled its workforce to more than 1,000 people across four countries, while tackling some of the hardest problems in artificial intelligence: how do you make breakthroughs with commercial value? How do you expand the talent pipeline in the most competitive technology job market? How do you build responsible and ethical artificial intelligence?

The first challenge Ibrahim faced was how to measure the success and value of an organization that does not sell tangible products. DeepMind was acquired by Google for £400m in 2014, and the company lost £477m in 2019. Its £266m of revenue that year came from other Alphabet companies such as Google, which pay DeepMind for any commercial AI applications developed in-house.

“Having served on the board of a public company, I know the pressures Alphabet faces. In my experience, you often get tripped up when an organization focuses on the short term. Alphabet has to consider both the short term and the long term from a value perspective,” Ibrahim said. “Alphabet sees DeepMind as an investment in the future of artificial intelligence, while providing some business value to the organization. Take WaveNet as an example: DeepMind technology is now integrated into Google products [such as Google Assistant] and into Project Euphonia.” The latter is a speech service through which patients with ALS [motor neuron disease] can keep their own voice.

These applications are mainly developed by the DeepMind for Google team, which is dedicated to commercializing its AI for Google’s business.

She insists that DeepMind has as much autonomy from its parent company as it “needs so far”, such as setting its own performance management goals. “I have to say, when I joined I was curious whether that would be a little tense. And it hasn’t been yet,” she said.

Another major challenge is recruiting researchers in a highly competitive job market: companies such as Apple, Amazon and Facebook are all competing for artificial intelligence scientists. According to reports, a senior scientist’s salary may be around £500,000, and a few earn millions of pounds. “DeepMind [pay] is competitive, regardless of your level and position, but that is not the only reason people stay,” Ibrahim said. “Here, people care about the mission [of building artificial general intelligence] and see how the work they do advances it, not only in itself, but as part of a greater effort.”

The third challenge Ibrahim focuses on is translating ethical principles into the practice of DeepMind’s artificial intelligence research. Researchers are increasingly emphasizing the risks posed by artificial intelligence, such as autonomous killer robots, and the use of facial recognition and other technologies that replicate human prejudices and violate privacy.

Ibrahim has long been driven by the social impact of technology. At Intel, she worked on projects such as bringing the internet to remote communities in the Amazon rainforest. “When I interviewed with Shane [Legg, DeepMind co-founder], I went home and thought: could I work at this company and have my twin daughters sleep at night knowing what their mother is doing?”

DeepMind’s sister company Google has been criticized for its approach to the ethical issues of artificial intelligence. Last year, Google reportedly fired two ethical artificial intelligence researchers, Timnit Gebru and Margaret Mitchell, who argued that language-processing artificial intelligence (which Google also develops) can reproduce the biases found in human language. (Google describes Gebru’s departure as a “resignation”.) The public fallout has led to a crisis of faith in the artificial intelligence community: are technology companies such as Google and DeepMind aware of the potential harms of artificial intelligence, and do they genuinely intend to mitigate them?

To this end, Ibrahim established an internal social impact team drawn from different disciplines. It meets with the company’s core research teams to discuss the risks and impacts of DeepMind’s work. “You have to constantly re-examine these assumptions… and the decisions you make, and update your thinking on that basis,” she said.

She added: “If we don’t have the expertise, we bring in experts from outside DeepMind: people from the security field, privacy specialists, bioethicists and social psychologists. There is a cultural barrier to [scientists] opening up and saying: ‘I don’t know how it will be used, and I hardly dare to guess, because what if I guess wrong?’ We did a lot of work in organizing these meetings to ensure psychological safety.”

DeepMind has not always been so cautious: in 2016, it developed an ultra-accurate video-based AI lip-reading system that could help deaf and hard-of-hearing people, but it did not acknowledge the personal safety and privacy risks. Ibrahim says, however, that DeepMind now gives more consideration to the ethical impact of its products, such as WaveNet, its text-to-speech system. “We did consider the potential for abuse, where and how we could mitigate it, and how to limit its applications,” she said.

Part of the job, Ibrahim said, is understanding the problems that artificial intelligence should not be used to solve. “There are places where it should not be used. For example, surveillance applications are a problem, [and] lethal autonomous weapons.”

She added: “I often describe it as a moral calling. Everything I have done has prepared me for this moment: to work on the most advanced technology to date, and [on] understanding… how to use it.”

Three questions for Lila Ibrahim

Who is your leadership hero?

Craig Barrett. I was Intel’s chief of staff and he was the CEO at the time. He followed in the footsteps of Bob Noyce, Andy Grove and Gordon Moore… legends of the semiconductor industry. Together we did a lot of pioneering work, such as connecting the internet to remote areas of the world that had never been reached. He would say: “If someone wants to scold you, let them come and talk to me, because I support you.”

What was the first leadership lesson you learned?

Many people in the organization were questioning [my work]. I ran into some problems with [Barrett’s] direct reports and senior managers. He sat me down and said: “Lila, a pathfinder always has more arrows in the back than in the front, because everyone is trying to catch up.” He said: “Let me pull out those arrows so you can run farther and faster.” That is my way of leading: I want people to try and not be afraid of making mistakes. I was able to do this because my leadership hero did it for me early in my career.

If you were not the CEO/leader, what would you be?

The first job I wanted was President of the United States, but now it would be more like a diplomat. Bringing people together and understanding their differences to move things forward is something I realize I have always been passionate about. It’s about finding similarities in apparently different places.
