How AI is Impacting Fintech Today

We spoke with AI thought leader and technologist Sampad Das to discuss the risks, opportunities, use cases and realities of using AI in the modern fintech landscape.

FintechWomen
5 min read · Oct 3, 2023

Sampad Das is a tenured thought leader in emerging technologies. He has delivered innovation to financial firms for over a decade and understands firsthand the impact of AI, blockchain and IoT across financial services. Das joined FintechWomen to offer his take on the current state of AI in fintech.

Q: AI has obviously been in the headlines for a while now, and the discourse has been relatively split between available opportunities and the risks associated with artificial intelligence. How can companies and individuals effectively balance possibility with these very real concerns to make the future of AI a net-positive?

We’re already seeing some of the positive opportunity AI has to offer. It plays a significant role in the fintech industry by automating processes, enhancing decision-making and improving customer experiences. As consumers and practitioners, we’ve become familiar with seeing AI in areas like customer service, the automation of repetitive tasks such as data entry, fraud detection and document verification (which reduces operational costs and errors), and the personalization of financial product and service recommendations to meet individual customer needs.

That said, I think AI is unlikely to replace people entirely, based in part on some of the risks we’ve seen: namely, biases that stem from existing inequities, homogeneous perspectives and feedback loops that reflect our own blind spots. In short, existing models aren’t foolproof, and human intervention will be required to maintain both fairness and accuracy. Instead, AI works best when it augments human capabilities, enabling professionals to focus on higher-value tasks like strategic planning, relationship management and complex problem-solving.

Human oversight is, and will remain, crucial for regulatory compliance, ethical decision-making and addressing unforeseen situations. Going forward, AI and humans will likely work in tandem to optimize fintech operations and enhance customer experiences.

Q: The fintech industry has been using automated and intelligent technologies like robotic process automation (RPA) and dynamic chatbots for a while now. How is emerging AI technology different, and what are some of the use cases we can anticipate in the near future?

Intelligent technologies such as RPA and dynamic bots are rule-based. They follow preconfigured rules and instructions and do not make decisions based on data or learning. In general, RPA is designed for repetitive work, and the technology excels at automating tasks like invoice processing and field or form-filling.

Conversely, AI automation handles work that typically requires a level of human intelligence, such as problem-solving, interpretation, image recognition and decision-making. AI systems learn from data and adapt to new situations. In this sense, they can be used for a wide range of applications, including natural language processing, medical diagnosis, computer vision, autonomous vehicles, recommendation systems and many more.
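To make that distinction concrete, here is a minimal Python sketch of the two approaches. It is purely illustrative: the threshold, feature names and sample transactions are invented, and a real fraud model would be trained and validated on far richer data.

```python
# Illustrative sketch: rule-based automation vs. learned AI automation.
# The threshold, feature names and sample data below are invented for demonstration.
from sklearn.linear_model import LogisticRegression

# --- RPA-style rule: a fixed, preconfigured instruction ---
def rule_based_flag(amount: float, is_foreign: bool) -> bool:
    """Flag a transaction if it exceeds a hard-coded limit or is foreign."""
    return amount > 10_000 or is_foreign

# --- AI-style model: learns a decision boundary from labeled history ---
# Each row: [amount, is_foreign]; label 1 = previously confirmed fraud
X = [[120, 0], [9_500, 1], [15_000, 0], [40, 0], [22_000, 1], [300, 1]]
y = [0, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

new_txn = [[8_000, 1]]
print("Rule-based flag:", rule_based_flag(8_000, True))                   # same answer every time
print("Model probability of fraud:", model.predict_proba(new_txn)[0][1])  # shifts as the data changes
```

The rule never changes unless someone rewrites it, while the model's behavior is a function of the data it was trained on, which is both its strength and the source of the bias concerns discussed above.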

Q: As the industry begins to work with AI tools, it’s becoming increasingly clear that human factors are just as important as the technology itself. How can we work more effectively with the tools at hand, and what kinds of approaches and inputs yield better results?

As I’ve said, we’ve identified some inherent and learned biases that have been built into existing AI models, so we’re seeing a new field called “Explainable AI” or “XAI” emerge. This enables us to integrate human factors with AI technology by formalizing approaches that allow human users to comprehend and trust the outputs of machine learning algorithms. It’s not a silver bullet, but it does bring a new perspective and provides some solutions for identifying potential AI biases.
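As a rough illustration of what XAI can look like in practice, the sketch below uses permutation importance, one common feature-attribution technique, to show which inputs drive a hypothetical credit-approval model. The feature names and synthetic data are assumptions made for the example, not a production recipe.

```python
# Illustrative XAI sketch: which inputs drive a credit model's decisions?
# Feature names and data are invented; a real model would need careful validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years"]
X = rng.normal(size=(500, 3))
# Synthetic label: approval mostly driven by income and debt ratio
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

If a feature that should be irrelevant (or a proxy for a protected attribute) dominates the attribution, that is exactly the kind of signal a human reviewer can act on.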

Organizations must be proactive and thoughtful throughout the machine learning pipeline and train their people to support this effort. Fortunately, there are some tangible steps we can take, such as ensuring diverse and representative data for training, and auditing and reviewing input and output data. As we build and use these algorithms, we can also search for hidden biases or assumptions to address the problem where it lives. This requires us to leverage inclusive teams, as well as techniques like adversarial debiasing and differential privacy, to reduce discrimination.
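The kind of input and output audit described above can start very simply. The sketch below compares approval rates across a hypothetical protected attribute and flags the disparity for human review; the group labels, synthetic outcomes and 10% threshold are all assumptions for illustration.

```python
# Illustrative fairness audit: compare model approval rates across groups.
# Group labels, data and the 10% disparity threshold are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=1_000)
approved = rng.random(1_000) < np.where(groups == "group_a", 0.62, 0.48)  # synthetic outcomes

rates = {g: approved[groups == g].mean() for g in ("group_a", "group_b")}
disparity = abs(rates["group_a"] - rates["group_b"])

print("Approval rates:", rates)
print("Demographic parity difference:", round(disparity, 3))
if disparity > 0.10:  # flag for human review and possible retraining
    print("Disparity exceeds threshold; audit training data and model for bias.")
```

In practice a demographic-parity check like this would be one of several metrics, applied both to training data and to live model outputs as part of the continuous monitoring described below.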

Simply put, organizations need to continuously assess and monitor their AI systems throughout their lifecycle to proactively correct and retrain these solutions to avoid skewed outcomes.

Q: What do companies and individuals have to consider when implementing AI into their process? More specifically, what are some of the strategic, logistical and risk-related considerations the industry needs to account for as we move forward?

For companies to successfully implement AI solutions, they need to consider a number of factors:

  • Alignment with Objectives: AI projects must align closely with the organization’s strategic goals and objectives — ultimately contributing to the overall mission and vision of the company.
  • Fairness and Ethics: AI systems must be developed and used ethically, fairly, transparently and with the previously discussed bias mitigation measures in place.
  • Long-term Orientation: A successful AI implementation is not a one-time effort but an ongoing process; this often requires continuous evolution to meet changing business needs and technological advancements.

It’s also worth noting that the biggest challenge with implementing AI is often the tech itself. The decision-making processes behind many AI systems are opaque and (occasionally) inexplicable. It is important that teams take proactive steps to understand how these AI models arrive at their conclusions and take measures to identify flaws in the training data they rely on.

If organizations can account for ambiguity or shortcomings at the outset, they’ll be better positioned to use and trust their AI solution.

Q: What are your hopes for the future of AI in fintech?

We’ll likely see some standards and regulation to prevent negative uses and trends, such as misinformation or deepfake content, security vulnerabilities and an overall lack of algorithm accountability. However, with the right strategies and guidelines, the potential of AI in fintech is virtually limitless.

In the next three to five years, we’ll likely see financial institutions using AI to provide more personalized and affordable services to their customers, expand financial inclusion, reduce fraud and improve risk management across the entire ecosystem.

Want to advance your fintech knowledge? Subscribe to our mailing list to receive our monthly newsletter and stay up to date on the latest insights, events and opportunities from FintechWomen.

The thoughts expressed in this article are Das’s alone, and they do not reflect formal advice or opinions from his employer, PwC.
