An artificial intelligence system recently passed the hardest financial adviser exam in minutes, looking like the smartest money manager in the room.
It feels like progress. But it should also feel like a warning. Remember the old Holiday Inn Express commercials? A regular person suddenly acted like a surgeon or a scientist just because they had "stayed at a Holiday Inn Express last night." The joke worked because credentials matter. Or think of Leonardo DiCaprio in Catch Me If You Can, faking his way through roles as a pilot, lawyer, and doctor. He looked the part, but it was all an act.
That is where we are with AI today. It can ace exams and imitate expertise. But is it a real colleague, or just a Holiday Inn moment with nothing proven behind it? Passing a test shows knowledge, but it does not prove judgment, accountability, or integrity. That is why credentials are only the start of trust, not the end. It is like passing the written driving exam and being handed the keys before ever sitting behind the wheel.
AI is not just another app or software update. It is an intelligent colleague, one you never interviewed, never trained, and cannot fire. Doctors now work with AI to read scans, and the pairing outperforms doctors working alone. AI has scored 100 on the medical licensing exam. Lawyers have already been embarrassed in court when judges found their filings filled with fake citations produced by careless AI use.
That means AI is in the room today, shaping diagnoses, lawsuits, and investments. Yet it has never earned trust face-to-face. Would you let a doctor who aced the written test but never treated a patient care for your child? Would you trust a financial adviser who passed the exam but never sat across from a client? Would you put your case in the hands of a lawyer who cleared the bar but never argued in court?
AI is not just another tool. It is a colleague in the room, and colleagues require oversight. AI passed the test, but it has never earned the license.
The danger is not that AI makes mistakes. The danger is professionals using it without diligence or oversight. Judges have begun treating sloppy AI use as misconduct. This is not the first time we have confused capability with trust, and the consequences are already clear. Too many people are leaning on AI like the guest in the Holiday Inn Express commercial: it makes them sound smarter overnight. But sounding smarter is not the same as being competent.
Blind trust in AI is malpractice. Every colleague requires supervision, correction, and training. AI should be no different.
The temptation is to say Congress must solve this. But history says otherwise. With Section 230, lawmakers handed social media companies sweeping immunity. That decision has fueled decades of misinformation, division, and mistrust. Section 230 has become a failed experiment.
Congress cannot repeat the Section 230 mistake of granting immunity without accountability. Passing an exam may show knowledge, but it does not prove readiness or responsibility. What Congress can do now is set a clear national baseline, so professionals and families are not trapped in a maze of conflicting state rules. The American Medical Association is urging caution, the Florida Bar is weighing guardrails for lawyers, and Illinois has already banned certain uses of AI in therapy. That patchwork creates uncertainty, not trust.
Real leadership must come from the professions themselves. Congress can provide minimum guardrails, but only the professions can prove that AI is safe to use. Doctors, lawyers, and financial advisers must become trust innovators. That means adopting a Trust Innovator Pledge — a professional oath every trusted field should adopt:
Diligence: Validate before use.
Oversight: Keep a human in the loop.
Education: Train professionals and the public.
Correction: Admit mistakes and fix them quickly.
This is not bureaucracy. It is a trust contract. If professionals hold themselves to these standards, families can feel safe that the unseen colleague in the room is accountable.
This debate is not theoretical. My own mother has courageously battled ovarian cancer for over six years. My family knows what it means to wait for scans that could change everything. One of my clients has developed an AI that can detect the earliest signs of cancer from a single vial of blood. With trust, that kind of innovation can save lives. Without it, mistrust will kill opportunity.
AI has passed the tests. But it has not earned trust. Washington can provide minimum guardrails, but only professionals can prove that AI deserves a place at the table as a true colleague.
Would you bet your life, your case, or your money on someone who looks good on paper but has never earned your trust face to face?
The AI doctor is in. The question is whether we give this new colleague a stethoscope before it has earned our trust.
Bryan Rotella is the managing partner and chief legal strategist of GenCo Legal.