A few key themes are emerging.

This article is from The Technocrat, MIT Technology Review's weekly technology policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
Last week, Senate Majority Leader Chuck Schumer (a Democrat from New York) announced his grand strategy for AI policymaking in a speech in Washington, DC, ushering in what could be a new era for US technology policy. He outlined some key principles for regulating AI and argued that Congress should introduce new laws quickly.
Schumer's plan follows a series of smaller political actions. On June 14, Senators Josh Hawley (a Republican from Missouri) and Richard Blumenthal (a Democrat from Connecticut) introduced a bill that would exclude generative AI from Section 230 (the law that shields online platforms from liability for content created by their users). On Thursday, the House science committee hosted a handful of AI companies to ask questions about the technology and the risks and benefits it poses. House Democrats Ted Lieu and Anna Eshoo, along with Republican Ken Buck, have proposed a National Commission on AI to manage AI policy, and a bipartisan group of senators has suggested creating a federal office to promote, among other things, competition with China.
While this flurry of activity is noteworthy, US lawmakers aren't actually starting from scratch on AI policy. "You're seeing a group of offices develop individual interpretations of specific parts of AI policy, mostly where it ties into their preexisting concerns," says Alex Engler, a fellow at the Brookings Institution. Individual agencies like the FTC, the Department of Commerce, and the US Copyright Office have responded quickly to the craze of the past six months, issuing policy statements, guidelines, and warnings about generative AI in particular.
Of course, with Congress we never really know whether talk will turn into action. But the way US lawmakers are thinking about AI reflects some emerging principles. Here are three key themes in all this talk that will help you understand where US AI legislation might be heading.
- The US is home to Silicon Valley and prides itself on protecting innovation. Many of the biggest AI companies are American, and Congress won't let you (or the EU) forget it! Schumer called innovation the "north star" of US AI strategy, which means regulators will likely ask tech CEOs how they would like to be regulated. It will be interesting to watch the tech lobby at work here. Some of this language arose in response to the European Union's recent AI regulations, which some tech companies and critics say will stifle innovation.
- Technology, and artificial intelligence in particular, should be aligned with democratic values. We've heard this from top officials like Schumer and President Biden. The subtext here is the narrative that US AI companies are different from Chinese AI companies. (New guidelines in China dictate that generative AI outputs must reflect communist values.) Expect the US to lean on this argument as it keeps restricting China's access to the chips that power artificial intelligence systems and continues its escalating trade war.
- A big question: what happens to Section 230. A giant unanswered question for AI regulation in the US is whether we will see Section 230 reform. Section 230 is a 1990s US internet law that protects tech companies from being held liable for content posted on their platforms. But should tech companies get the same get-out-of-jail-free pass for AI-generated content? Answering that question would require tech companies to identify and label AI-created text and images, which is a huge undertaking. Since the Supreme Court recently declined to rule on Section 230, the debate has likely been kicked back to Congress. Whether and how lawmakers decide to reform the law could have a huge impact on the AI landscape.
So where is all this going? Nowhere in the near term, as politicians head off for their summer break. But starting this fall, Schumer plans to kick off invitation-only focus groups in Congress to look at particular aspects of AI.
Meanwhile, Engler says we may hear discussions about banning certain applications of AI, such as sentiment analysis or facial recognition, echoing parts of the EU regulation. Lawmakers could also try to revive existing proposals for comprehensive technology legislation, such as the Algorithmic Accountability Act.
For now, all eyes are on Schumer's big swing. "The idea is to come up with something so comprehensive, and to do it so fast. I expect there will be a significant amount of attention," Engler says.
What else I'm reading
- Everyone is talking about Bidenomics, the current president's particular brand of economic policy. Technology is at the heart of Bidenomics, with billions of dollars being poured into the industry in the United States. For a sense of what that means on the ground, it's worth reading this story from the Atlantic about a new semiconductor factory coming to Syracuse.
- AI detection tools try to identify whether online text or images were created by AI or by a human. But there's a problem: they don't work very well. New York Times reporters tested various tools and ranked them by performance. What they found makes for sobering reading.
- Google's advertising business is having a rough week. New research reported by The Wall Street Journal found that around 80% of Google's ad placements appear to violate its own policies, a finding Google disputes.
What I learned this week
We may be more likely to believe disinformation generated by AI, according to new research covered by my colleague Rhiannon Williams. Researchers at the University of Zurich found that people were 3% less likely to identify inaccurate tweets created by AI than those written by humans.
It's just one study, but if it's backed up by further research, it's a troubling finding. As Rhiannon writes, "The generative AI boom puts powerful and accessible AI tools in the hands of everyone, including bad actors. Models like GPT-3 can generate false text that looks convincing, which could be used to generate false narratives quickly and cheaply for conspiracy theorists and disinformation campaigns."