From Big Tech to Big Law: perspectives on new horizons for generative AI
If you are skeptical about the nebulous field of Responsible AI or think that regulation only slows down innovation, this blog post is for you. It is my sincere attempt to reflect on the multi-dimensional questions at the intersection of legal risks, access to justice, and AI innovation.
It has been a decade since I started at the Think Tank innovation team at Samsung Research, located across from NASA Ames in Mountain View, CA. I still remember the HAM amateur radio prototype we put together which allowed us to listen in on NASA’s radio across the highway. I had just wrapped up a fellowship and lived on that same NASA campus for some time, and it all felt like home somehow.
Today, it has been nearly a year since I started at the Responsible AI Testing team at DLA Piper, which is part of the firm’s new Artificial Intelligence and Data Analytics Practice. Responsible AI innovation is the breath of what we do at our team. More concretely, I’m excited about building agentic AI systems evaluation pipelines through legal red-teaming, standardization, and mechanism design.
I want to take a moment to reflect and share some perspectives about how this journey has shaped my thinking. In summary:
- Blue Sky thinking is your best bet and rapid prototyping is a meaningful way to explore new possibilities.
- Sustaining motivation — a diverse, open, and supportive culture inspires deeper questions about why things are as they are. It takes conscious effort to sustain and is the table where the “eats strategy for breakfast” happens.
- Human-centered everything is key, recognizing that we coexist within complex socio-economic, socio-political, and socio-ecological systems, and points of friction are opportunities to invite change and create value.
Science fiction has long been a driving force behind today's technological innovations, turning imaginative ideas into real-world breakthroughs that shape our future. Both my earlier experiences across the street from NASA Ames and my work today have been tremendously inspired by the power of relentless and radical accounts of wonder and imagination.
I’m extremely proud of our DLA team’s recognition with a 2024 Financial Times award for service innovations in generative AI, including legal red-teaming and proactive compliance-as-a-service. We just wrapped up the year with a public seminar on red-teaming AI systems in healthcare. Blue Sky has been a common thread, including responding to conference calls that have resonated with our work — the Blue Sky track at the International Conference on Autonomous Agents and Multiagent Systems.
At Samsung, I led the machine learning development for a project where we used the unique signature of the tissues and blood in your body to encode and decode sensor data from your phone or smartwatch. The AI models I was building were inspired by researchers' approach to looking for aliens at the time. The biggest difference was that the antenna on our prototype was not as powerful and I was looking for patterns in the lower part of the spectrum. I know that because I was a regular attendee at NASA conferences and was fascinated by their quest to understand the origins and prevalence of life and intelligence in the universe. Although using the blood and tissues of our bodies to store data securely is not our current reality, the idea was genuinely inspiring and led to a few inventions that may prove helpful for entirely different purposes.
On naming things and epistemic flexibility
The quest for understanding what intelligence is and why a fairly simple deep learning model fails differently for different kinds of people led me to ask entirely new categories of questions. Understanding the failure modes of AI models was at its nacensy and I was lucky to be at the forefront through coming across the Fairness, Accountability, and Transparency of AI (FAccT) workshop at one of the most prestigious machine learning conferences — NeurIPS. Interestingly, the same conference recently went through a name change, previously called NIPS, while FAccT which is now a leading conference in the Responsible AI space, started as being called FAT ML, which led to FAT*, which eventually became FAccT. Naming and re-naming through a critical approach is both challenging and a natural part of learning and growth.
Responsible AI, AI Ethics, AI Governance, AI Safety, Radical AI, and many other concepts have been muddled in the ecosystem of researchers and organizations working on understanding and mitigating the complex limitations of AI systems. In my experience working in a technology company innovation team, one way out of the misunderstanding that names can cause is a demo. Rapid prototyping, an iterative approach to building things, co-designing with diverse stakeholders, engaging end-users, and centering the margins of who is considered an end user was truly beneficial. To do that in sustainable ways, however, organizational culture and structure needed to change as well in order to enable more meaningful forms of participation and discourse among diverse worldviews and perspectives.
Building good tools for other people to use
With Generative AI, we have infinite possibilities to create without the end in mind. Beyond anything, AI and any technology remain a tool that can be used for good and for bad. Product-oriented teams within technology companies are accustomed to human-centered user research informed by a number of research fields such as Value Sensitive Design and Anthropology.
In contrast, within an innovation team, we were tasked to expand on the idea of human experiences created by the AI-driven technology we were building. We didn’t have any user researchers or product leads on our team. The goal was not to solve any specific problem but rather to create experiences, while there was little time we could spend on what problems might such new experiences cause.
A tool can only be as helpful as its user’s ability to use it to accomplish what they need to accomplish. In creating new kinds of experiences, the users themselves may not necessarily know what they want, which makes many old user research paradigms fall on their head. Ultimately, however, it is people who would either decide to use a tool or not.
Within our work at DLA Piper, the tools we build aim to empower attorneys and data scientists to be better equipped to use generative AI effectively and safely. A human-centered approach helps us bring diverse perspectives that can inform critical decisions along the AI lifecycle. From building, development, deployment, monitoring, and regulation — many different kinds of humans may be considered intended or unintended users of an AI system. Our ability to rapidly prototype, evaluate, and integrate new kinds of feedback loops within human-AI interaction cycles is critical in driving adoption and scale in safe and responsible ways.
Falling in love with friction
From research to innovation to evaluating the legal risks — it is all people all the way down — people who are incredibly skilled at what they do. They may not have often worked together in the past, however, I think the more space there is for collaboration, the more we expand the horizons for what’s possible.
As Huggy Rao and Robert Sutton write in The Friction Project — “friction is terrible and wonderful,” and plays a critical part in how smart leaders make the right things easy and wrong things hard. Learning from decades of research and practice of friction fixers, we can turn to the growing field of AI Safety and ask: What if we could make the safe things easier and the high-risk things harder?
As generative AI technologies mature and get better integrated across different industries, we will be much better off being able to rely on strong networks of cooperation that center assurance, equity, and access.
I hope these perspectives inspire you to think outside the boxes of disciplinary boundaries society imposes on us and can’t wait to hear from you. What have your experiences been at the intersection of AI innovation and legal risks? What questions expand your views on transforming friction into more meaningful human experiences? Thank you!