“Last Week Tonight” Covered AI. Here’s Our Take.

March 3, 2023, Rusty Walters

In this past Sunday’s episode of HBO’s Last Week Tonight, John Oliver tackled the recent explosion of AI adoption head-on. While we’ll cover some of the points he brought forward, the segment is worth the 27 minutes of viewing if you’ve yet to see it (keep in mind, it’s a late-night show on HBO, so maybe watch with headphones in at work).

Key takeaways from the episode

While ChatGPT has thrust AI front and center in the public consciousness, John does a fantastic job of pointing out that we’ve been living with AI for years. It’s true that once a piece of technology or innovation becomes integrated into our daily lives, we often forget what’s really powering it.

For instance, AI drives many of the recommendations in online shopping and on your streaming devices, and it powers many processes in your phone (facial recognition, predictive text, etc.). What is notable about the AI we have become accustomed to, however, is that it’s largely what we would consider “discriminative AI”: AI that provides recommendations, decisions, or predictions based on patterns learned from its training data – different from what ChatGPT delivers.

OpenAI’s ChatGPT has put the concept of “generative AI” into the mainstream. Generative AI consists of models trained to create new content based on their training data. Text, images, video, and audio can all now be created on the fly from simple (or elaborate) prompts given by the user, as we explored in a previous blog. The business impact of these two types of AI working together is profound and has been covered ad nauseam across the web, with potential use cases touching every corner of the customer experience.
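To make the distinction concrete, here is a minimal, hypothetical sketch in Python (not from the episode or from our prior posts): a scikit-learn classifier stands in for discriminative AI, which predicts a label for a given input, while a toy Markov chain stands in for generative AI, which produces new text from learned patterns. The data and variable names are illustrative assumptions only.

```python
# A minimal sketch contrasting discriminative vs. generative models.
# The toy data and helper names here are illustrative assumptions.
import random
from collections import defaultdict

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# --- Discriminative: predict a label for a given input ---
reviews = ["great product", "terrible service", "love it", "waste of money"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
classifier = LogisticRegression().fit(X, labels)
print(classifier.predict(vectorizer.transform(["great service"])))  # e.g. [1]

# --- Generative: produce new text resembling the training data ---
corpus = "the customer wants fast answers and the customer wants clear answers".split()
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)  # learn word-to-word transitions

word = "the"
generated = [word]
for _ in range(6):
    word = random.choice(transitions.get(word, corpus))  # sample the next word
    generated.append(word)
print(" ".join(generated))  # a new word sequence built from learned patterns
```

The classifier can only map inputs to existing labels; the generator, however crude, creates sequences that never appeared verbatim in its training data. That asymmetry is the heart of the discriminative/generative distinction.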

While the endless opportunities for AI-enabled use cases are largely driving the narrative today, organizations need to consider more than just applications of AI models.

Four critical areas to embrace for a successful AI strategy

  • Culture of innovation: The organizations that thrive in the AI era will be those that adopt a culture of innovation, leveraging agile principles to rapidly test and adopt new technology that drives operational efficiencies and improves the customer experience.
  • Clear areas of impact: Innovation needs to be directed to the most impactful areas before it can scale. To demonstrate value quickly, organizations will need to clearly map and define areas of high impact as they implement new AI use cases.
  • Data and technology strategy: Your organization’s differentiating advantage in building bespoke AI models for your customers is the data available to train and activate those models. A mature, well-governed data ecosystem is critical to driving AI maturity.
  • Organizational accountability: Ethical AI usage will require your organization to have a transparent approach to leveraging models and clear governance in place to eliminate bias wherever possible.

Navigating the ethics of AI

Transparency and explainability are the driving forces behind John Oliver’s segment, which steers into some of the treacherous downsides of AI adoption, specifically the lack of transparency within many algorithms. This is an area that we at Merkle strongly believe companies need to get right out of the gate by adopting ethical and explainable AI processes.

From “AI hallucination,” where AI models seemingly create facts from thin air, to AI building implicit bias into its recommendations, there are very real ethical and legal implications for organizations that do not take these risks to heart.

Recent regulation coming out of the EU is starting to get ahead of AI transparency risks. By establishing guidelines for areas deemed high risk for AI bias (public services, healthcare, employment), the EU AI Act aims to spell out how AI can be leveraged in specific industries.

With more regulation undoubtedly coming, we believe it’s in organizations’ best interests to adopt a mindset of hyper-transparency when considering future AI-driven use cases.

What’s critical to keep in mind is that while recent AI advancements may make it seem like the future is here, we are still very early in the progression. Today’s AI is still considered narrow in focus and reliant directly upon human input. The self-aware AI we see in movies like M3GAN or Her is not here yet, but it’s on our doorstep, and we need to prepare.

I think John Oliver summed up his segment perfectly with the reminder that “AI is ultimately a mirror, and it will reflect back exactly who we are from the best of us to the worst of us…” It’s up to all of us to determine what that mirror reflects.

As always, we would love to speak with you about how we are working with our clients to prepare for this revolution.

Want to learn more? Check out our ebook on ethical AI, a great primer for approaching explainability in AI strategically.