Preparing for the future has always been our leitmotif, and because we want to build trust in digital, we need to ask ourselves the right questions and anticipate unforeseen obstacles: What if the technology gets out of control? How will climate change shape tomorrow's world? How do we, as humans, interact with each other in the age of AI? We have thought some of these questions through with Usbek&Rica and imagined futuristic scenarios describing what the future could look like, based on the choices we make today.
------------------------------
I no longer understood anything: not my bill, not what was at stake in it. I never imagined it would be like this when I became a parliamentarian.
AI… For lack of anything better?
And yet I had gone into politics to get things moving. But I was overwhelmed. The parliamentary agenda was crammed with draft texts of every kind. Since the laws on the moralization of political life were passed in 2027, we had been deprived of human parliamentary assistants. Supposedly, this limited the risk of fictitious jobs. Since we were also prohibited from using consulting firms, we had no choice: only the conversational assistants remained. AI was supposed to solve all our parliamentary productivity problems and let us hit our reform targets on the Key Parliamentary Indicators dashboard.
Which AI? Did I have a choice?
I admit I quickly lost my footing. The AI did it all: my speeches, my questions to the government, my amendments, my contributions to parliamentary reports. At first I tried to add my own "trademark"... But I was overwhelmed. So I stopped touching anything. Everyone did the same! Of course, the use of AI was the great unspoken truth. Everyone was doing it; no one talked about it. In front of the cameras, we fervently recited the speeches and talking points drafted by the AI. Which AI, by the way? The state had considered a "public AI" but had to give it up for budget reasons (even if, officially, we blamed Brussels). So we had a choice between the two big models, "Microsoft or Google". In any case, give or take a couple of words, they produced the same texts…
A driverless train?
And then my parliamentary group wanted to put forward a bill on the regulation of AI. My group leader asked me to prepare it. I thought it was a joke! I tried to hold hearings with engineers, but most gave me answers formulated by the AI. In the end, a retired engineer told me that attempts to make AI "explainable" had failed between 2023 and 2026. Since then, no one had understood anything about it.
Who will guard the guards?
When I started submitting queries on how to "regulate the AI", the answers struck me as curious, even disturbing. The text implied that what was really needed was to better "regulate attempts to regulate AI". Everything was written in an abstruse, recursive way. Even worse than usual! I was never going to manage it!
In any case, I was taken off the project: I am under investigation for corruption and misuse of corporate assets. Last month, my son took one of my pens to school, and the AI of the Ministry of the Moralization of Public Life decided that an indictment was necessary...
