America the AI Leader?
- Muhsinah Morris
- Jul 23
- 5 min read
Thoughts by Dr. Muhsinah Morris with the help of OpenAI and AI Assistant on Adobe.
It is late on a Wednesday night, and I should be asleep. Instead, I am blogging about America's new AI Action Plan that came out today. Want to read it? Here is the link.

Yesterday, however, our fearless leader, President Donald Trump, reposted an AI-generated video on his own social media account depicting President Barack Obama being arrested in his presence while he smiled. Before I dive into why this horrendous video is so dangerous, let me tell you a few things about myself.
- I hate political discussions with a passion because people believe what they want.
- I equally despise religious discussions for the same reason.
- I love technology for the way it can enhance our lives and make them easier.
- I am a mom first and foremost and want the best for children all over the globe.
- I am an educator and innovator who would not have had the opportunities I have been afforded had it not been for my wonderful country, the USA!
- Our family business is the military, law enforcement, education, and telecom.
- I have a special needs brother and a child with autism who drive me to show up as my best self in the world and advocate for them at all costs.
Those are just a few things that you should know before I make my heartfelt statement.
My disappointment in our political leadership with regard to responsible AI use is heartbreaking. I always believe the best about people and pray for all of humanity because I know the miraculous people we are. To take something as useful as AI and use it for propaganda that can destroy the lives of others or defame their legacy and reputation is a poor use case. It is the lowest-vibration use of AI that I have seen in recent history, and it is coming from the most powerful human being on the planet.
As an educator and advocate for proper AI use, I just wonder where we went wrong. It is not the children we need to reeducate but the adults. We need to train our political leaders on the harms that AI can cause, because harm comes from what we generate or pass along. We can never be too careful about the messages we are sending into a world that is now largely digital. We are in an era where it is difficult for lay people to fact-check what they see on social media, and because they lack the skills to do so, they take everything at face value. This is dangerous!
While we have a nicely packaged AI Action Plan for America, I would like to see the part of this plan where we hold everyone accountable for producing generative AI images, videos, and music of the highest caliber, rather than attacking innocent people because we have the power to do so. At what point do we, the people, say this is enough?
If it has to start with me, then so be it. THIS IS TOO MUCH! A change must come where we step back to evaluate the type of human beings we put in leadership and hold them to higher standards. We, the people, need to step back from the urge to gossip and be divisive and instead be more inclusive and loving toward one another.
There are so many amazing use cases for AI that I cannot believe our highest leaders limit themselves to the vile, petty behaviors of teenagers on social media with no home training and no supervision.
The good thing about this entire situation is that now we have a plan and perhaps we can hold the government to those standards.
For now, there are several parts of the AI Action Plan that President Trump has already violated, according to OpenAI (2025). The prompt was: "How is President Donald Trump reposting an AI-generated video of Pres. Obama being arrested in the White House and pacing in a jail cell in front of Trump as he smiles in violation of the AI Action Plan?" The Action Plan was uploaded along with the question. The most important part of the response, the direct violations of America's AI Action Plan, is listed below.
📌 Contradiction 1: Misuse of Deepfakes in Legal Contexts
Plan Commitment: “Combat Synthetic Media in the Legal System” by developing forensic standards and guidelines to detect and reject AI-generated evidence intended to mislead courts and the public.
🔴 Violation: The video, even if framed as satire or political commentary, simulates criminal proceedings involving a former U.S. president (Obama). If it lacks disclaimers or watermarking, it could reasonably be interpreted as real by portions of the public—and that is exactly what the plan calls a national security and legal threat.
📌 Contradiction 2: Commitment to “Objective Truth” in AI Tools
Plan Commitment: “AI systems must be free from ideological bias… and be designed to pursue objective truth rather than social engineering agendas.”
🔴 Violation: Reposting an AI deepfake to promote a false legal narrative undermines public trust and shows selective enforcement of objectivity—particularly when the action comes from a political leader. It weaponizes AI as propaganda, contradicting the core ethical stance in the Action Plan.
📌 Contradiction 3: Public Trust in Government AI Use
Plan Commitment: “The tools must be trustworthy… prevent our advanced technologies from being misused or stolen by malicious actors.”
🔴 Violation: When a high-level political leader uses AI content to inflame, deceive, or manipulate, it blurs the line between public trust and political theater. This behavior sets a precedent that others may emulate, eroding public faith in the legitimacy of AI-generated content across society. (OpenAI, 2025)
Moreover, ChatGPT had advice on what to do next when leaders violate the AI Action Plan in the way that our President did recently. Here is that advice:
How to Hold Leaders Accountable for Violating Their Own Plan
Here’s how we can strategically use this incident to push for accountability, reform, and public awareness:
1. Issue a Public Statement or Open Letter
Frame it around: “America’s AI Action Plan states XYZ. This behavior directly violates those commitments. We call for X consequence (removal, retraction, platform enforcement, new guardrails).”
2. File an Official Complaint with the Office of Science and Technology Policy (OSTP) or NIST
They are responsible for:
Deepfake forensics development
AI evaluation standards
Public trust promotion
3. Leverage the TAKE IT DOWN Act (2025)
The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act is a U.S. federal law aimed at combating the nonconsensual online distribution of intimate images, including those created using AI deepfake technology. It makes it illegal to knowingly publish such images and requires online platforms to remove them within 48 hours of a victim's request. Though originally designed to target nonconsensual explicit deepfakes, this act is cited in the Action Plan as a starting point for addressing AI-generated harm. Consider advocating for its expansion to include political and reputational abuse of synthetic media.
This is a start, I would say. I also have one more call to action.
As a human being in a world being infused with artificial intelligence, I urge each of you to remember your humanity when using AI tools.
Remember that we have young ones watching us right now and modeling their lives on how we behave.
Remember that AI can build us up as a unified country or tear us down, one community against another.
Choose to build us up... We are the UNITED States of America, with an AI Action Plan that should be used by all Americans.