As AI adoption accelerates, government can take more steps to protect systems, data

Deloitte outlines a three-pronged approach to safeguarding government systems against cyberattacks.

Artificial intelligence (AI) has long seemed like a futuristic concept. But whether we notice it or not, AI is deeply ingrained in our day-to-day lives – from song playlists and virtual assistants to rideshare apps and fraud detection on bank accounts.

Hospitals use AI to improve patient care. At the University of Florida, for example, surgeons use AI to help predict and mitigate postoperative complications and improve patient outcomes.

Many universities have incorporated AI into their research and teaching; Florida Atlantic University (FAU), for one, employs AI in engineering, medical research and other areas of study.

“AI technology is increasing efficiencies in many ways and helping leaders solve some of their most pressing challenges,” said Dean Izzo, a client relationship executive at Deloitte Consulting, LLP. “However, the widespread adoption of AI and machine learning (ML) models across industries, including government, can increase vulnerabilities to adversarial attacks, such as cybersecurity breaches, privacy attacks and intellectual property theft.”

According to Deloitte: “As AI/ML solutions proliferate, the attacks on such systems also multiply.”

So, what can the government do to protect its systems and data from bad actors?

Deloitte Insights recently published an article, “Securing government against adversarial AI,” which outlines a three-pronged approach for governments and organizations to safeguard against cyberattacks:

— Cross-train the workforce to bridge the gap between AI/ML and cybersecurity expertise — the intersection of disciplines provides the best defense against adversarial attacks.

— Set security standards and bring in specialists to evaluate the security of AI/ML models and suggest countermeasures and risk mitigations.

— Secure the model development life cycle by adopting the most relevant and trusted tools, techniques, and standards from the rapidly evolving ecosystem around adversarial AI (a brief sketch of what this can look like follows this list).
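One way to act on that last point is to lean on an established open-source framework. Below is a minimal sketch using the open-source Adversarial Robustness Toolbox (ART) to probe a classifier with a standard evasion attack; the tiny model and random inputs are illustrative stand-ins for a real system, not anything drawn from the Deloitte report.

```python
# Minimal sketch: probing a model with a standard evasion attack using the
# open-source Adversarial Robustness Toolbox (ART). The model and data are
# illustrative placeholders, not a real government system.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder classifier standing in for a deployed AI/ML model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Generate adversarially perturbed copies of some stand-in inputs.
x = np.random.rand(8, 1, 28, 28).astype(np.float32)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)

# Compare predictions on clean vs. perturbed inputs: flipped predictions
# signal a vulnerability worth budgeting countermeasures for.
clean_preds = classifier.predict(x).argmax(axis=1)
adv_preds = classifier.predict(x_adv).argmax(axis=1)
print("Predictions flipped by the attack:", int((clean_preds != adv_preds).sum()), "of", len(x))
```

A model whose answers flip under such barely perceptible perturbations is exactly the kind of system the countermeasures discussed below are meant to protect.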

There’s no question a well-trained workforce is essential to securing systems and data. Yet, according to Deloitte, there is a gap in the cyber workforce.

In Florida, high schools, colleges and universities are expanding their technology curricula to include AI training. This fall, students in Florida high schools will have the chance to study AI through the Florida Department of Education's Career and Technical Education program.

Meanwhile, the University of South Florida offers a new graduate certificate in AI in response to the rising demand for a workforce with advanced technology skills.

For employers, it’s important to cross-train workers in technology roles across the organization. “Collaboration tools and governance workflows can support coordination across the various personas and ensure that development decisions consider security principles throughout the ML life cycle,” according to Deloitte.

However, skilled and trained team members cannot succeed without security standards and leadership support.

“As part of model governance, organizations should develop and maintain a counter–adversarial AI framework,” Deloitte recommends.

Chief security officers, chief data officers and other government leaders should assess all AI projects to identify their mission-critical systems and data, and should survey the adversarial AI ecosystem when budgeting for model improvements and protections. Measuring how vulnerable real-world systems are to infiltration, and the potential impact if adversaries succeed, is critical.
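What might such an assessment look like in practice? The sketch below is hypothetical: it inventories AI projects, scores each on mission criticality, exposure and data sensitivity, and ranks where protection budgets should go first. The fields and weights are invented for illustration and do not come from the Deloitte report.

```python
# Hypothetical sketch of an AI-project assessment: inventory each model,
# score its criticality and exposure, and rank where to budget protections.
# Fields and weights are illustrative assumptions, not a Deloitte methodology.
from dataclasses import dataclass

@dataclass
class AIProject:
    name: str
    mission_critical: int  # 1 (low stakes) .. 5 (mission-critical)
    public_exposure: int   # 1 (internal only) .. 5 (internet-facing)
    handles_pii: bool      # whether the model touches sensitive personal data

    def risk_score(self) -> int:
        # Simple weighted score; a real framework would be far richer.
        return 2 * self.mission_critical + self.public_exposure + (3 if self.handles_pii else 0)

projects = [
    AIProject("benefits-fraud-detector", mission_critical=5, public_exposure=4, handles_pii=True),
    AIProject("road-maintenance-forecaster", mission_critical=3, public_exposure=2, handles_pii=False),
    AIProject("internal-doc-search", mission_critical=2, public_exposure=1, handles_pii=False),
]

# Highest-risk systems surface first when budgeting for model protections.
for p in sorted(projects, key=lambda p: p.risk_score(), reverse=True):
    print(f"{p.name}: risk score {p.risk_score()}")
```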

Finally, Deloitte recommends that organizations move beyond traditional defense protections.

Since AI models are vulnerable to threats outside the traditional hardware-software dichotomy, defense techniques such as adversarial training are essential. Through this process, an AI model is trained on “adversarial examples,” deliberately perturbed inputs, so it learns to ignore the noise.
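To make that concrete, here is a minimal sketch of one common flavor of adversarial training, assuming a PyTorch classifier and FGSM-style perturbations; the model, data and epsilon below are illustrative placeholders rather than a prescribed method.

```python
# Minimal adversarial-training sketch: perturb each batch with FGSM-style
# noise, then train the model to classify the perturbed examples correctly.
# The model, data and epsilon are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder model
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epsilon = 0.1  # perturbation budget (assumed)

def fgsm_perturb(x, y):
    # Nudge the input along the sign of the loss gradient with respect
    # to the input: the classic fast gradient sign method (FGSM).
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# One training step on a stand-in batch of random "images" and labels.
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
x_adv = fgsm_perturb(x, y)

optimizer.zero_grad()
loss = loss_fn(model(x_adv), y)  # learn to ignore the adversarial noise
loss.backward()
optimizer.step()
print(f"adversarial-training loss: {loss.item():.4f}")
```

In practice, clean and adversarial batches are usually mixed so the model stays accurate on ordinary inputs while becoming harder to fool.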

“Securing government against adversarial AI” encourages organizations and government entities to deploy protections for their AI applications and models.

“As AI algorithms are increasingly incorporated into business and mission processes, organizations need to monitor the developments and investments in this space to stay up to date on current AI security and make strategic investments to help protect their models.”

To explore Deloitte’s insights in “Securing government against adversarial AI,” visit deloitte.com.

Peter Schorsch

Peter Schorsch is the President of Extensive Enterprises Media and is the publisher of FloridaPolitics.com, INFLUENCE Magazine, and Sunburn, the morning read of what’s hot in Florida politics. Prior to his publishing efforts, Peter was a political consultant to dozens of congressional and state campaigns, as well as several of the state’s largest governmental affairs and public relations firms. Peter lives in St. Petersburg with his wife, Michelle, and their daughter, Ella. Follow Peter on Twitter @PeterSchorschFL.


