The current discussion of AI is a bit like the process of buying a new car: before ever driving the car and discovering what it can do, buyers spend a great deal of time and energy on comparing and bargaining.
At the heart of the debate is whether AI has the right safeguards in place to ensure reliable use in the future, as concerns persist about AI-driven facial recognition technology and gender or racial bias in recruitment.
These concerns were on full display at EmTech Digital 2019, MIT Technology Review's annual conference in San Francisco. Alongside sessions on AI chipsets and robotics, the agenda featured topics such as "the ethics of AI," "the impact of AI on humans," and "the benefits of AI."
"AI has great potential and risks," Harry Shum, executive vice president of Microsoft's AI and research group, said in a recent speech. We need to integrate social responsibility into the structure of technology. "
Avoiding machine-induced war
Engineering responsibility into AI has turned out to be a tough problem, as Microsoft recently found in its work for the U.S. government.
Last October, Brad Smith, president of Microsoft, published a blog post outlining the company's philosophy on providing artificial intelligence technology to the U.S. military. "No military in the world wants to wage a war because of intelligent machines," he said. But he also warned that the people who know the most about the technology should be paying the most attention.
Recently, some Microsoft employees asked executives to terminate another contract, this one to provide augmented reality technology to the military. Satya Nadella, Microsoft's chief executive, rejected the request.
"We will provide our technology to the U.S. government and military." Harry Shum, executive vice president of Microsoft's AI and research group, stressed Microsoft's position during the Q & a session.
The Pentagon uses Google's AI products
Providing artificial intelligence products to the U.S. military has also been a problem for Google. Last year, it was reported that Google was providing the Pentagon with artificial intelligence technology to analyze drone video footage. Under pressure from its employees, Google said it would not renew the military contract.
But the controversy has not gone away. According to sources, the Pentagon has blocked the public release, under the Freedom of Information Act, of 5,000 pages of documents related to Google's AI work on the effort, known as Project Maven.
Kent Walker, Google's senior vice president of global affairs, stressed at the EmTech conference that the company is holding important internal discussions about the use of artificial intelligence and its potential impact on society.
In December, Google released details of a formal review structure for making decisions about what it considers "appropriate" uses of AI. "We think it's important to have a rigorous internal review," Walker said. He cited the decision to launch lip-reading technology but not facial recognition tools as an illustration of the company's position in the AI debate. "This is an example of the discussions we have every day," he said.
New AI Advisory Group
Google is also looking to broaden the conversation around AI. Walker recently announced that the company is setting up an external advisory committee to help guide the future deployment of artificial intelligence. "The next step will be based on our collaboration with key stakeholders around the world," he said.
Li Feifei, a former Google executive in the field of artificial intelligence, left the company in 2018 after the dispute over the Maven project. An experienced technologist in the field, she serves as co-director of Stanford University's Institute for Human-Centered Artificial Intelligence.
Despite leaving Google, Li has not been able to escape the controversy surrounding her career. When the new institute launched on March 18, media reports pointed out that its 121 initial members were mostly white and male.
"It keeps me awake. We don't have enough diversity and inclusiveness in this field," Li said
Progress of artificial intelligence in the automotive and construction industries
While ethics and diversity were the main themes of the conference, there were also signs of progress on some important fronts in artificial intelligence.
In the field of autonomous driving, Dmitri Dolgov, chief technology officer and vice president of engineering at Alphabet's Waymo, reported that the deployment of the company's self-driving vehicles is progressing rapidly. Waymo's vehicles have driven 10 million miles on roads in 25 cities, including Phoenix, where the company offers a small-scale automated ride service.
Dolgov said, "autopilot cars are not as tired as humans, distracted, and do not text while driving, nor are they drunk. We've been tweaking our intelligent systems to use the most robust algorithms. It's not about time, it's not about if, it's about how fast we grow. "
AI is also making progress in areas such as construction. Andrew Anagnost, CEO of Autodesk, described how his company collects data from RFID tags, drone site-monitoring footage, and checklists, and uses artificial intelligence to improve the design and construction of buildings.
"Construction can be a sloppy, poorly managed, low-precision process," Anagnost said. We now collect a lot of data on the construction site. If we can get information and layers based on feasible insights, we can make buildings major change in the way we work. "
While concerns about the use of AI occupied much of the conference, researchers seemed optimistic about the field's potential.
"There may be tensions between the concept of responsible innovation and responsible AI, but AI is likely to solve some of the challenges we face," said Google's walker