AI is often seen as a threat to democracy and a boon to dictators. In 2025, algorithms will likely continue to undermine democratic conversation by spreading outrage, fake news, and conspiracy theories. In 2025, algorithms will also continue to accelerate the creation of total surveillance regimes, in which the entire population is watched 24 hours a day. Most importantly, AI facilitates the concentration of all information and power in one hub. In the 20th century, a distributed information network like the United States functioned better than a centralized information network like the USSR, because the human apparatchiks at the center could not analyze all the information efficiently. Replacing apparatchiks with AI could make Soviet-style centralized networks superior.

However, AI is not all good news for dictators. First, there is the notorious problem of control. Dictatorship is based on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a “special military operation,” and calling it a “war” is a crime punishable by up to three years in prison. If a chatbot on the Russian internet calls it a “war” or mentions war crimes committed by Russian forces, how can the regime punish the chatbot? The government could block the bot and seek to punish its human creators, but this is much harder than disciplining human users. Moreover, authorized bots might develop dissenting views on their own, simply by spotting patterns in the Russian information sphere. It’s the alignment problem, Russian style. Russia’s human engineers can do their best to create an AI that toes the regime’s line, but given AI’s ability to learn and change by itself, how can the engineers ensure that an AI that earned the regime’s approval in 2024 doesn’t venture into illegal territory in 2025?

The Russian Constitution makes grandiose promises that “everyone shall be guaranteed freedom of thought and speech” (Article 29.1) and that “censorship shall be prohibited” (Article 29.5). No Russian citizen is naive enough to take such promises seriously. But bots don’t understand doublespeak. A chatbot instructed to abide by Russian law and values might read the constitution, conclude that freedom of speech is a core Russian value, and criticize the Putin regime for violating that value. How can a Russian engineer explain to a chatbot that although the constitution guarantees freedom of speech, it should not trust the constitution or even mention the gap between theory and practice?

In the long run, authoritarian regimes face an even greater danger: instead of criticizing them, AI might seize control. Throughout history, the greatest threat to an autocrat has usually come from his own subordinates. No Roman emperor or Soviet premier was overthrown by a democratic revolution, but they were always in danger of being toppled or turned into puppets by their own underlings. Dictators who grant AI too much authority in 2025 may find themselves its puppets down the road.

Dictatorships are far more vulnerable than democracies to such algorithmic takeovers. It would be difficult for even a super-Machiavellian AI to amass power in a decentralized democratic system like the United States. Even if the AI learned to deceive the US president, it would face opposition from Congress, the Supreme Court, state governors, the media, major corporations, and various NGOs. How would an algorithm deal with, say, a Senate filibuster? Seizing power in a highly centralized system is far easier. To hack an authoritarian network, an AI needs to fool just one paranoid individual.