AI EO from WH is BS

An executive order just came out of the White House on the "use of trustworthy artificial intelligence in government." Leaving aside the baseless presumption of the government's own trustworthiness, and the implication that it is the software that has the trust issues, the order is almost entirely hot air.

The EO is like others of its kind in that it is limited to what a president can unilaterally compel federal agencies to do, which in practice is not much. It "orders federal agencies to be guided" by nine principles, which tells you the level of impact right there. Please, agencies, let yourselves be guided!

And then, of course, all military and national security activities are exempted, which is exactly where AI systems are most dangerous and oversight is most needed. No one is worried about what NOAA is doing with artificial intelligence, but plenty of people are very concerned about what the three-letter agencies and the Pentagon are up to. (They have their own self-imposed rules.)

The principles read like a wish list. AI used by the federal government must be:

lawful; purposeful and performance-driven; accurate, reliable, and effective; safe, secure, and resilient; understandable; responsible and traceable; regularly monitored; transparent; and accountable.

I would challenge anyone to find a single large-scale deployment of AI anywhere in the world that is all of those things. Any agency that claims an AI or machine learning system it uses adheres to all of these principles, as detailed in the EO, should be treated with extreme skepticism.

It is not that the principles themselves are bad or unnecessary; it is certainly important that an agency be able to quantify the risks when considering using AI for something, and that there be a process in place to monitor its effects. But an executive order does not accomplish this. Strong laws, probably starting at the city and state level as they already have, are what it will take to hold AI accountable, and while a federal law is unlikely to emerge anytime soon, an executive order is no substitute for one. This one is simply too vague on almost everything. Besides, many agencies have already adopted "principles" like these in recent years.

In fact, about the only thing the EO actually does is compel each agency to produce a list of all the uses it makes of AI, however it chooses to define that. Of course, it will be more than a year before we see any of this.

Within 60 days of the order, agencies will choose the format for these AI inventories; 180 days after that, the inventories must be completed; 120 days after that, the inventories must be reviewed for consistency with the principles; plans to bring systems in line with those principles are something agencies must "strive" to achieve within a further 180 days; meanwhile, within 60 days of finalizing the inventories, they must be shared with other agencies; then, within 120 days of completion, they must be shared with the public (minus anything sensitive to law enforcement, national security and so on).

In theory, then, we could see these inventories within a year, but in practice we are looking at something more like a year and a half, at which point we will get a look at the previous administration's AI tools, with all the juicy bits removed at their discretion. Still, it could make for interesting reading, depending on what exactly it includes.
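To see how those deadlines stack up, here is a quick back-of-the-envelope sketch in Python. It is purely illustrative: the step names are shorthand rather than the EO's own terms, and it assumes each clock starts the moment the previous step finishes.

```python
# Illustrative only: the EO's deadlines stacked sequentially, assuming each
# clock starts when the previous step ends (a simplification of the actual triggers).

steps = [
    ("choose the inventory format", 60),                  # days after the order
    ("complete the inventories", 180),                    # days after the format is set
    ("review inventories against the principles", 120),   # days after completion
    ("'strive' to bring systems in line", 180),           # days after the review
]

elapsed = 0
for name, days in steps:
    elapsed += days
    print(f"{name}: day {elapsed} (~{elapsed / 30:.0f} months)")

# Public copies are due within 120 days of completing the inventories,
# which lands around day 240 + 120 = 360 -- roughly a year after the order.
print(f"share inventories with the public: day {60 + 180 + 120} (~12 months)")
```

Stacked that way, the public inventories land around the one-year mark and the "strive to comply" deadline around the 18-month mark, which is consistent with the year-and-a-half figure above.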

This executive order is, like others of its kind, an attempt by the White House to position itself as an active leader on something that is almost entirely out of its hands. Certainly, the development and deployment of AI should be done according to shared principles, but even if those principles could be imposed from the top down, this timid and barely binding gesture, which more or less asks certain agencies to raise a hand and swear they have thought very hard about them, is not the way to do it.
