Don't get carried away with thoughts of where AI could go.

When you hear the term AI – one of the hottest buzzwords in many industries right now, including credit unions – what first comes to mind?

Is it something safe and familiar, like the targeted ad for a Miami Airbnb that pops up in your Facebook feed after you've researched vacation rentals? Or do you see something out of a movie – a city of self-driving cars with drones hovering above, where robots have taken on human roles in not only professional but personal settings (think a robot functioning as your personal assistant, therapist and maybe even lover)?

News about artificial intelligence is everywhere, and while the technology is still in its early stages of development, it's hard not to fall down a rabbit hole of speculation about where it might go each time you're reminded of the topic. And it's safe to say the possibilities of AI are causing equal parts excitement and fear. Excitement for how it could help improve sales, lead to lucrative business opportunities and free up lots of our time once it takes over our mundane tasks. Fear for all the people working in the jobs it may destroy, the potential accidents and cyberattacks that may come along with it, and how creepy it's going to feel when people start talking to robots as if they're good friends.

At CU Times, our coverage of AI is narrowly focused on its current and potential impact on financial services, which ranges from virtual assistants taking over teller jobs to automating underwriting and compliance work. For credit union leaders exploring possible uses of AI in their industry, it's important to keep your thoughts "down to earth." In other words, understand where the technology is now and consider how you can contribute to its next steps, because at this point, what it will look like decades down the road is largely a fantasy.

Looking back at our recent coverage of AI and reports that have been funneled our way, there are a lot of conflicting thoughts floating around among consumers and business professionals. One headline that caught my eye in a recent email was, "Invoca Study Finds Consumers Want Brands to Understand Their Emotions in the Age of AI." This statement, which implies discomfort with AI, resonated with me because, well, how can you connect emotionally with a brand when the "person" on the other end of the phone or messaging window is not capable of real emotion?

For the study, call tracking and analytics firm Invoca, along with Adobe, examined the importance of a brand's emotional quotient ("EQ") and how AI might affect its EQ during consumer interactions. It found that most consumers do not want to see AI take over completely: More than half said the future should offer a combination of human and automated support, 61% said AI will make shopping less personal and 51% said AI will make it more frustrating. Millennials and Gen Zers have more faith in AI's emotional potential, however, with 54% of people under age 35 stating they believe AI will gain EQ in the next five years, compared to 45% of people over 35 (which makes sense, because thanks to apps like Tinder, many people under age 35 have already been downgrading human beings to disposable commodities that can be deleted or blocked with the tap of a finger, but that's a whole other story).

Juxtaposing the Invoca study was another recent one from professional services company Accenture, which reported that 45% of consumers are more comfortable using AI than they were a year ago, based on a survey of more than 6,000 people in six countries. It also found 62% of respondents thought government was at least as qualified as the private sector to deliver AI-enabled services. In another study, this time from information provider Neustar, 82% of security professionals said they fear AI-fueled attacks against their organization, with stolen data (50%) and loss of customer trust (19%) the top concerns.

Clearly, our thoughts are all over the place when it comes to AI. We want to tap into its potential so we can do our jobs better, but we're also afraid it'll backfire by opening the door to new types of attacks. We want it to replicate humans so humans can spend their time doing more important things, but we know something will always be missing from the replicated human because it'll never be human.

And even if we do use AI to create digital personas or robots that behave very much like humans, is that something we even want? This question popped into my mind as I was editing an incredibly interesting contributed article from the Filene Research Institute's Elry Armaza, "Righting Course: The Credit Union Fix for AI's Unintended Consequences," which described how Microsoft's 2016 launch of a Twitter bot named Tay led to a disastrous (and somewhat hilarious) outcome:

Within 16 hours, Tay, shaped by the conversations it established with other users, began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut it down. Tay was programmed to learn from the behaviors of other Twitter users, and in that regard, the bot was a success. The difference between Tay and many AI-based solutions is that Tay's results were transparent, and its human watchers recognized that even though it was operating "properly," the result was not socially desirable.

This example proves that first, a lot of people on Twitter are a-holes. Second, when we get technology to act like a human, it may not behave the way we want it to or the way we feel is "right" – just like with a real human. That means we either need to get AI to only impersonate humans with desirable, likeable, noble traits … or forget about using AI in certain human-like roles altogether.

The future of AI is a big unknown, and for now, we're probably better off focusing on its more realistic, near-term, "safer" applications. For credit unions, that includes exploring ways to use AI in fraud prevention, authentication, compliance and credit decisioning.

As someone who is truly fascinated by the technology changes that have taken place in the last 30 years and their impact on our everyday lives, I'm excited to sit back and witness what the next 30 years have in store for us, especially on the AI front. But when it comes to technology stepping into the realm of human emotion, we'll need to start setting some boundaries.

Natasha Chilingerian

Natasha Chilingerian is managing editor for CU Times. She can be reached at [email protected].

