Exclusive: Google promises changes to research oversight after internal revolt


(Reuters) – Alphabet Inc’s Google will change its procedures before July for reviewing its scientists’ work, according to a town hall recording heard by Reuters, part of an effort to quell internal turmoil over the integrity of its artificial intelligence (AI) research.

FILE PHOTO: Google’s name is displayed outside the company’s London, UK office, November 1, 2018. REUTERS/Toby Melville

Speaking at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company ousted two prominent women and rejected their work, according to an hour-long recording whose contents were confirmed by two sources.

Teams are already testing a questionnaire that will assess projects for risk and help scientists navigate reviews, the research unit’s chief operating officer, Maggie Johnson, said at the meeting. This initial change will roll out by the end of the second quarter, and most papers will not require extra vetting, she said.

Reuters reported in December that Google had introduced a “sensitive topics” review for studies touching on dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported.

Jeff Dean, Google’s senior vice president overseeing the division, said Friday that the “sensitive topics” review “is and was confusing” and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules, according to the recording.

Ghahramani, a Cambridge University professor who joined Google in September from Uber Technologies Inc, said during the town hall: “We need to get comfortable with that discomfort” of self-critical research.

Google declined to comment on Friday’s meeting.

An internal email, seen by Reuters, offered new detail on Google researchers’ concerns, showing exactly how Google’s legal department had modified one of the three AI papers, called “Extracting Training Data from Large Language Models.” (bit.ly/3dL0oQj)

The email, dated February 8, from a co-author of the paper, Nicholas Carlini, went to hundreds of colleagues, seeking to draw their attention to what he called “deeply insidious” edits by company lawyers.

“Let’s be clear here,” the roughly 1,200-word email read. “When we as academics write that we have a ‘concern’ or find something ‘concerning’ and a Google lawyer requires us to change it to sound nicer, this is Big Brother stepping in.”

The required edits, according to his email, included “negative-to-neutral” swaps, such as changing the word “concerns” to “considerations” and “dangers” to “risks.” The lawyers also required deleting references to Google technology; the authors’ finding that AI leaked copyrighted content; and the words “infringement” and “confidential,” the email said.

Carlini did not respond to requests for comment. Google, in response to questions about the email, disputed his contention that the lawyers were trying to control the paper’s tone. The company said it had no issues with the topics the paper investigated, but found some legal terms used inaccurately and conducted a thorough edit as a result.

RACIAL EQUITY AUDIT

Last week, Google also appointed Marian Croak, a pioneer in internet audio technology and one of Google’s few black vice presidents, to consolidate and manage 10 teams that study topics such as racial bias in algorithms and technology for the disabled.

Croak said at Friday’s meeting that it would take time to address the concerns of AI ethics researchers and mitigate the damage to Google’s brand.

“Please hold me fully responsible for trying to change that situation,” she said on the recording.

Johnson added that the AI organization is bringing in a consulting firm to conduct a wide-ranging racial equity impact assessment. The first such audit for the department would lead to recommendations “that are going to be quite difficult,” she said.

Tensions in Dean’s division had deepened in December after Google fired Timnit Gebru, co-lead of its ethical AI research team, following her refusal to retract a paper on language-generating AI. Gebru, who is black, accused the company at the time of reviewing her work differently because of her identity and of marginalizing employees from underrepresented backgrounds. Some 2,700 employees signed an open letter in support of Gebru. (bit.ly/3us5kj3)

During the town hall, Dean explained what kind of scholarship the company would support.

“We want responsible AI and ethical AI research,” Dean said, giving the example of studying the technology’s environmental costs. But it is problematic, he said, to cite data that is off “by almost a factor of a hundred” while ignoring more accurate statistics, as well as Google’s efforts to reduce emissions. Dean had previously criticized Gebru’s paper for not including important findings on environmental impact.

Gebru defended her paper’s citations. “It’s a really bad look for Google to go on the defensive against a paper that was cited by so many of its peer institutions,” she told Reuters.

Employees have continued to post about their frustrations over the past month on Twitter as Google investigated and then fired ethical AI co-lead Margaret Mitchell for moving electronic files outside the company. Mitchell said on Twitter that she had acted “to raise concerns about race and gender inequity, and to speak up about Google’s problematic firing of Dr. Gebru.”

Mitchell had contributed to the paper that prompted Gebru’s departure, and a version published online last month without Google affiliation named “Shmargaret Shmitchell” as a co-author. (bit.ly/3kmXwKW)

Asked for comment, Mitchell, through an attorney, expressed disappointment at Dean’s criticism of the paper and said her name was removed at the company’s direction.

Reporting by Paresh Dave and Jeffrey Dastin; Editing by Jonathan Weber and Lisa Shumaker
