
Securing the Artificial Intelligence Supply Chain May Require an Abundance of AI

For Fowler, unsupervised learning, where the context permits, is the way to stop malicious interference. This kind of AI protects AI applications, and the systems linked to them, without the risks that come with starting from a training data set.

In line with Fowler’s view, federal agencies, in coordination with the Information Technology Sector Coordinating Council, call in a recent document for adopting artificial intelligence to monitor system security and to detect anomalous behaviors.
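As a rough illustration of that unsupervised approach, the sketch below fits an anomaly detector to unlabeled system telemetry and flags outlying behavior. It assumes Python with scikit-learn, and the telemetry features are invented for illustration, not drawn from any agency's tooling.

```python
# Minimal sketch of unsupervised anomaly detection: an IsolationForest is
# fit on a window of ordinary system telemetry (no labeled attacks) and
# then flags observations that look unlike the baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in telemetry: rows = observations, cols = (cpu, mem, net) metrics.
normal = rng.normal(loc=[0.3, 0.5, 0.2], scale=0.05, size=(500, 3))
suspect = np.array([[0.95, 0.9, 0.99]])  # one anomalous-looking reading

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 marks an anomaly, 1 marks an inlier
```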

AI has also proven to be a performance booster in tool development.

DevSecOps, short for development, security and operations, benefits from AI at this key stage of any software production chain, including the chains that produce other AI tools. Paradoxically, leveraging AI here also entails risks.

If the language models used to simplify programmers’ work are tainted with malicious or weak inputs, the code they generate will poison every subsequent stage of the production chain.
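One way to blunt that risk is to gate AI-suggested code before it enters the pipeline. The sketch below is a hypothetical pre-merge check that runs the open-source bandit scanner over a directory of generated code; the gate_ai_code function and the directory layout are assumptions for illustration, not any vendor's actual tooling.

```python
# Hypothetical pre-merge gate: scan AI-suggested code with a static
# analyzer (bandit) before it is allowed into the production chain.
import subprocess
import sys

def gate_ai_code(path: str) -> bool:
    """Return True only if the static scan of `path` reports no findings."""
    result = subprocess.run(["bandit", "-r", path], capture_output=True, text=True)
    return result.returncode == 0  # bandit exits non-zero when issues are found

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "generated/"
    if not gate_ai_code(target):
        sys.exit("AI-generated code failed the security scan; blocking merge.")
```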

One option is to assist programmers with AI models that do not draw training data from large public sets, which could introduce new risks, but from controlled enterprise environments, where the results carry an extra layer of safety.
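A minimal sketch of that idea, assuming an internal provenance marker (invented here) stamped on security-reviewed files, might admit only vetted in-house code into a fine-tuning corpus:

```python
# Sketch of the "controlled enterprise set" idea: only files carrying an
# internal review marker are admitted to the training corpus. The marker
# string and directory layout are hypothetical.
from pathlib import Path

REVIEW_MARKER = "# security-reviewed:"  # hypothetical provenance tag

def build_corpus(root: str) -> list[str]:
    corpus = []
    for source in Path(root).rglob("*.py"):
        text = source.read_text(encoding="utf-8", errors="ignore")
        if REVIEW_MARKER in text:  # admit only vetted, in-house code
            corpus.append(text)
    return corpus
```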

Nevertheless, even where companies adopt these safeguards, production entails more than coding, and technologies must be implemented safely across the full production chain.

“I can crank out code really fast and then throw it over the wall to a test team that can’t test it that fast. It doesn’t matter,” said Joel Krooswyk, federal chief technology officer at GitLab.

Beyond the technical specifications needed to run AI models, the federal government has a set of criteria that hardware must meet. Key among them, according to Fowler, is compliance with rules on the U.S. origin of critical components and on supply chain traceability. Cloud services add an extra layer of security requirements.

But vigilance doesn’t end once compliant hardware is plugged in and playing.

“A compromised piece of hardware isn’t going to act strange when you plug it in. If the attacker is really thinking about it, they’re going to have a time delay; they’re going to have it when you’re no longer vetting and validating it,” Fowler told SIGNAL Media in an interview.

Therefore, part of keeping hardware safe falls to the same AI it runs.
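A hedged sketch of what such monitoring could look like: a rolling baseline of a device metric, with an alert when a reading drifts sharply from that baseline long after initial vetting, the time-delayed scenario Fowler describes. The class name, window size and threshold are illustrative, not a real product's logic.

```python
# Sketch of catching time-delayed misbehavior: keep a rolling baseline of
# a hardware telemetry metric and flag readings that deviate sharply from
# it, even months after the device passed its initial vetting.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of readings
        self.threshold = threshold           # alert at this many std devs

    def check(self, reading: float) -> bool:
        """Return True if `reading` deviates sharply from the baseline."""
        if len(self.history) >= 10:
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5 or 1e-9
            if abs(reading - mean) / std > self.threshold:
                return True  # anomalous: do not fold it into the baseline
        self.history.append(reading)
        return False
```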
