President Joe Biden yesterday issued a sweeping executive order aiming to impose federal regulation on the development of artificial intelligence technologies, such as large language models like ChatGPT. The executive order cites the emergency powers of the Korean War-era Defense Production Act as the justification for imposing federal regulation on AI technologies. As my Reason colleague Eric Boehm has pointed out, "the Defense Production Act has become a license for central planning." Taken as a whole, the new order amounts to federal central planning for artificial intelligence.

Among other things, the order will "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government," according to the White House. Specifically, the new federal AI regulators are supposed to oversee any "foundation model" that purportedly "poses a serious risk to national security, national economic security, or national public health and safety" by requiring that developers report to the secretary of commerce the results of extensive "red-team safety tests." Roughly speaking, foundation models are large language models like OpenAI's GPT-4, Google's PaLM 2, and Meta's Llama 2.

Red-teaming is the practice of creating adversarial squads of hackers to attack AI systems with the goal of uncovering weaknesses, biases, and security flaws. As it happens, the leading AI tech companies—OpenAI, Google, Meta—have been red-teaming their models all along. The National Institute of Standards and Technology is charged with setting up the additional safety standards with which the AI developers are supposed to comply.

Complying with such reporting requirements will likely slow down the process of safety and security testing undertaken by Big Tech developers while at the same time driving out smaller competitors who cannot afford the costs of dotting regulatory i's and crossing bureaucratic t's. An even bigger worry is that the new AI safety testing orders will quickly evolve into the digital equivalent of the deadly slow, hyper-precautionary FDA drug safety approval scheme. It is far from clear how U.S. national defense can be enhanced by slowing down domestic AI innovation, and the U.S. regulations will not apply to foreign competitors who will be able to catch up and surpass U.S. artificial intelligence developers hampered by bureaucratic fetters.

In addition, the executive order directs the Department of Commerce to develop techniques for watermarking the outputs of AI technologies. This means embedding information into photos, videos, audio clips, or text to let users know that they were generated by AI. As it happens, AI companies like OpenAI and Google are already doing that. Of course, scammers and propagandists will simply ignore watermarking when they create their misleading deepfakes.

Biden's order also directs various federal agencies to address the problem of AI "job displacement" and "job disruption." And doubtlessly, such a powerful suite of technologies will affect nearly everyone's work activities and prospects. But keep in mind the dire prediction back in 2014 that robots would steal one in three human jobs by 2025. Only just over a year to go, folks.