
Kansas considers regulating AI in campaigns; it’s already being used

A bill that would have prohibited deceptive artificial intelligence-generated campaign advertisements didn’t pass by the turnaround deadline in the Kansas House, but the rapidly evolving technology is already in use in campaigns in several different ways.

Pat Proctor, R-Leavenworth, introduced House Bill 2559, along with House Minority Leader Vic Miller, D-Topeka. The bill wouldn’t ban AI in campaign ads, but it would make it a misdemeanor crime to create an advertisement that gives a false representation of a candidate without disclosing that it is artificially generated in the advertisement.

AI-generated political ads have already hit the airwaves over the past year.

In April, the Republican National Committee released an attack ad on President Joe Biden that was entirely artificially generated. In July, a pro-Ron DeSantis super PAC released an ad that included artificially generated audio of former President Donald Trump. And a consultant for longshot Democratic presidential candidate Dean Phillips commissioned robocalls that used AI to mimic Biden’s voice, urging voters not to participate in the New Hampshire primary.

“Nobody knows what to do about it, but everybody’s worried,” Proctor said. “I felt like we needed to try to do something this year, at least to alert voters when they were consuming media that was created with generative AI.”

Though the bill didn’t pass its chamber of origin, Proctor said there’s still a chance the mirror bill proposed in the Kansas Senate could pass this year. Some senators, he said, were concerned they were moving too fast to address the novel issue of artificial intelligence.

“I don’t think that we can move fast enough on this because I guarantee you that right now campaigns, PACs and maybe even parties are looking at ways to use this technology,” Proctor said, “and we need to put up some guard rails.”

The rise of AI

More rudimentary forms of artificial intelligence existed for decades before the latest wave of AI products. Earlier systems followed logic explicitly programmed by their creators, but new large language models like OpenAI’s ChatGPT use machine learning to refine themselves.

“It’s really the fact that the machines, or the systems, can now learn from data rather than having to be explicitly programmed by humans,” said Ross Gruetzemacher, a professor at Wichita State University who studies AI.

Humans aren’t programming the logic, but they are curating the data that AI systems learn from, getting them to follow instructions and fine-tuning them to reflect the preferences of their users.
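The distinction Gruetzemacher draws, between logic a human writes by hand and behavior a system learns from data, can be illustrated with a minimal sketch. The classifier, features, and training data below are entirely hypothetical; they stand in for the far larger systems the article describes:

```python
# Illustrative contrast: explicitly programmed logic vs. machine learning.
# All data and rules here are toy examples, not any real system's.

# Explicitly programmed: a human writes the decision rule directly.
def rule_based_flag(text: str) -> bool:
    return "free money" in text.lower()

# Learned: a tiny perceptron adjusts its weights from labeled examples
# instead of having the rule written by hand.
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs with label 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # 0 when correct; no update needed
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy training data: two binary features, label is their logical OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # prints [0, 1, 1, 1]
```

The learned model ends up reproducing whatever pattern is in its training data, which is exactly why curating that data, and correcting what the model absorbs from it, matters so much.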

“Sometimes these models come off training and they’re inherently not well aligned with human values,” Gruetzemacher said. “They reflect a lot of stuff that’s in Reddit threads, things that are biased or bigoted, and then you have to go and say no this is wrong.”

AI models can also be inaccurate. Chatbots repeatedly misstated election information for upcoming primary elections. The guardrails in place may also cause issues, like Google’s Gemini AI’s creation of ahistorical images that have been shared across the internet in the past week.

The fallibility of the technology could lead to dishonest depictions in political campaigns even without human direction.

“If there isn’t fine-tuning of the model to make or remove a lot of the stuff that it learned from the internet, then it could have outputs that are consistent with negative political ads, or it could learn to potentially massage the truth in ways that are in a gray area ethically but are commonly done in political ads,” Gruetzemacher said.

The technology could also erode trust in candidates themselves. General counsel for the Kansas Secretary of State’s Office said the nature of AI could lead to what’s called a liar’s dividend, where candidates can dismiss potentially harmful information as artificial intelligence. Offending campaigns could also shirk responsibility and blame missteps on the technology.

“At some point you’re going to get content that does get into ethical gray areas or crosses ethical lines,” Gruetzemacher said, “and then there’s a lack of accountability for the politicians.”

AI already used in Kansas politics

You don’t need to look for hypothetical ads to find AI in Kansas politics; it’s already being used in political campaigns. On Friday, the Kansas Republican Party announced a partnership with Numinar, a company that uses AI to build voter profiles for campaigns.

Numinar uses data it receives from the Republican National Committee to contact and survey voters. It then uses that information to build predictive models of how likely a person is to vote and which issues matter to them.
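A predictive voter model of the kind described above can be sketched in a few lines. This is a generic logistic-regression example with invented features and data; Numinar’s actual models and inputs are not public:

```python
# Hypothetical sketch of a turnout-likelihood model: logistic regression
# trained by gradient descent. Features and data are invented for
# illustration only.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, epochs=500, lr=0.5):
    """rows: per-voter feature vectors; labels: 1 if they voted, else 0."""
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            grad = p - y  # gradient of log loss w.r.t. the linear output
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def turnout_probability(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy records: [voted_in_last_primary, years_registered / 10]
voters = [[1, 0.8], [1, 0.3], [0, 0.2], [0, 0.5]]
voted  = [1, 1, 0, 0]
w, b = train_logistic(voters, voted)

# A frequent past voter scores higher than a new, inactive registrant.
print(turnout_probability(w, b, [1, 0.6]) > turnout_probability(w, b, [0, 0.1]))
```

A campaign would use scores like these to decide whom to contact and with what message, which is the "microtargeting" Gruetzemacher distinguishes from generative content later in the article.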

“It’s not that voter scoring hasn’t ever been done before. It’s just that the process has usually been restricted to analytics firms. And those are often more cost prohibitive for campaigns,” a Numinar spokesperson said. “A lot of times people want to know what issue matters most in an election. Crime? Being pro-choice or pro-life? When they get that data back from people, it can help them adjust their own messaging so that they’re reaching the right voters with the right message.”

Numinar said content development and messaging are still going to be the sole responsibility of candidates. Last March, Numinar CEO Will Long told Reuters that it experimented with AI-generated audio and images, but the spokesperson said the priority for the company right now is building its content suite rather than generative AI.

Gruetzemacher’s immediate concern with AI in politics is with generative content, more so than the microtargeting of voter demographics that Numinar does. If microtargeting is joined with generative content, he thinks it would be difficult for a campaign to remain transparent in its messaging.

He also sees the boundless potential of the technology, even if there are issues that need to be accounted for.

“What we’re going to have to do this time is deal with even more risks and manage them quickly,” Gruetzemacher said. “If we can do that, I think there’s a lot more potential than even the computer or the internal combustion engine and potentially more than electricity.”


