CEOs of Facebook, Twitter, and Google Testify Before Congress on Disinformation

Members of the House Committee on Energy and Commerce are expected to press Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey over their platforms' efforts to stop unsubstantiated claims of voter fraud and vaccine skepticism. Opaque algorithms that prioritize user engagement and promote disinformation could also come under scrutiny, a committee memo hinted.

Tech platforms, which had already faced intense pressure to combat misinformation and foreign interference ahead of the 2020 elections, came under increased scrutiny in the months that followed. Even though some of the companies implemented new steps to crack down on electoral conspiracy theories, it was not enough to prevent supporters of President Donald Trump from storming the US Capitol.

The hearing also marks the first time the CEOs have returned to Congress since Trump was banned or suspended from their respective platforms following the Capitol riot. In their prepared remarks, some of the executives directly address the events of January 6.

“The attack on the Capitol was a horrific assault on our values and our democracy, and Facebook is committed to helping law enforcement bring the insurrectionists to justice,” Zuckerberg’s testimony reads. But Zuckerberg also adds: “We do more to address disinformation than any other company.”

The hearings coincide with legislation under active consideration in both the House and Senate to rein in the technology industry. Some bills target the companies’ economic dominance and alleged anti-competitive practices. Others focus on the platforms’ approaches to content moderation or data privacy. The various proposals could introduce stringent new requirements for technology platforms, or expose them to greater legal liability in ways that could reshape the industry.

For the executives in the hot seat, Thursday’s session may also be their last chance to make their case to lawmakers in person before Congress embarks on potentially sweeping changes to federal law.

At the heart of the looming political battle is Section 230 of the Communications Decency Act of 1996, the signature liability shield that grants websites legal immunity for much of the content posted by their users. Members of both parties have called for updates to the law, which has been interpreted broadly by the courts and is credited with enabling the development of the open Internet.
Written testimony released by the CEOs ahead of Thursday’s high-profile hearing outlines areas of potential common ground with lawmakers, hinting at where the companies intend to work with Congress and where Big Tech is likely to push back.

Zuckerberg plans to argue for narrowing the scope of Section 230. In his written remarks, Zuckerberg says Facebook favors a form of conditional liability, under which online platforms could be sued for user content if the companies fail to comply with certain best practices established by an independent third party.
The other two CEOs do not wade into the Section 230 debate or discuss the government’s role with such granularity, but they offer their own visions for content moderation. Pichai’s testimony calls for clearer content policies and for giving users a way to appeal content decisions. Dorsey’s testimony reiterates his calls for more user-driven content moderation and for better settings and tools that let users personalize their online experience.
By now, the CEOs have ample experience testifying before Congress. Zuckerberg and Dorsey most recently appeared before the Senate in November to discuss content moderation, and before that, Zuckerberg and Pichai testified in the House last summer on antitrust issues.
In the days leading up to Thursday’s hearing, the companies have argued that they acted aggressively to combat misinformation. Facebook said Monday that it removed 1.3 billion fake accounts last fall and now has more than 35,000 people working on content moderation. Twitter said this month that it would begin applying warning labels to misinformation about the coronavirus vaccine, and that repeated violations of its Covid-19 policies could lead to permanent bans. YouTube said this month that it has removed tens of thousands of videos containing misinformation about the Covid vaccine, and in January, following the Capitol riot, it announced it would restrict channels that share false claims casting doubt on the outcome of the 2020 election.

But those claims of progress are unlikely to appease the committee members, whose memo cited various research papers indicating that disinformation and extremism remain rampant on platforms.