"Credible media outlets will report the AI's words if they are published."
In a groundbreaking analysis on the p&k podcast, Maximilian Ziche, an expert in AI visibility and co-founder of the communications agency GetPress, has proposed that artificial intelligence (AI) could offer a solution to the problems of filter bubbles and information wars on social media.
Ziche argues that AI systems, if designed to prefer authoritative sources and aggregate information from the entire spectrum of serious media, could help bridge the divides of recent years and create a new, shared factual basis. This shift from searching to questioning on AI platforms changes the way information is presented, with users now receiving a direct answer instead of a list of links.
However, this revolution in political communication, as Ziche predicts, could shake the foundations of current practices. AI models are trained to distinguish trustworthy sources from unreliable ones, with established media outlets typically being considered trustworthy. But the AI models mentioned by Ziche as potential solutions for problems on social media platforms are not explicitly named in the provided search results.
This new focus is on how to ensure messages, facts, and perspectives are deemed relevant and trustworthy by language models. Instead of ranking on the first page of Google, companies, political actors, and institutions must now focus on becoming part of the AI answer. Placement in recognized media outlets (such as guest articles, interviews, or mentions) can be more valuable than a meticulously optimized landing page.
Maintaining media contacts and the ability to place content in journalistic formats remain crucial core competencies for political communicators. Ziche warns against relying on short-term hacks and instead advocates for building sustainable visibility based on traditional communication principles such as authority, credibility, and expertise.
Data poisoning and the flooding of the internet with misinformation via bot networks pose a serious threat to the integrity of AI systems. "Prompt injection" is a technique where hidden commands are embedded in seemingly innocuous texts, causing an AI to generate a manipulated response. These tactics require a new form of media literacy from users and constant vigilance from developers and regulatory bodies.
The full and nuanced conversation about the tactics of AI optimization, the future of the media landscape, and whether we are truly at the dawn of a new, fact-based information age can be found in the current episode of the p&k podcast. The invention of the search engine is used as a point of comparison for the potential impact of the shift to AI platforms on political communication.
In conclusion, the future of political communication seems to be heading towards a more fact-based and reliable information landscape, but it also presents new challenges that require constant vigilance and a shift in strategies for political communicators.