The Legal Department

AI Fundamentals: What The Legal Department Needs To Know About Artificial Intelligence With Alya Sulaiman Of McDermott Will & Emery


 

Artificial Intelligence has undeniably transformed so much of how we do things that we can only conclude it is here to stay. For the legal profession, this means it is only a matter of time before we confront and accept this added dimension to our work, if we haven’t already. Our guest in this episode already foresees that every lawyer will someday become an AI lawyer. Alya Sulaiman of McDermott Will & Emery joins Stacy Bratcher to give us the basics of AI, demystifying this technology that is shaking up our lives. Alya is one of the legal thought leaders in artificial intelligence. She breaks down AI for in-house counsel and shares ways that AI can help you in your legal department. Walking through how AI shows up in practice, especially in healthcare, Alya then tackles the privacy and data stewardship obligations as well as the liability frameworks that lawyers must take into consideration. The legal profession may have gotten more complex with the advent of this new technology, but the technology also holds promise for the work we do. Tune in as Alya takes us deeper into the intersection of AI and law.

Listen to the podcast here

 

AI Fundamentals: What The Legal Department Needs To Know About Artificial Intelligence With Alya Sulaiman Of McDermott Will & Emery

I’m excited to talk with Alya Sulaiman, a partner at McDermott Will & Emery and one of the legal thought leaders in artificial intelligence. She’s going to be giving us some basics of AI for in-house counsel and ways that AI can help you in your legal department. She’s even going to help us see if AI has any recommendations for a great pump-up song. This is an interesting conversation. I’m hoping you’re going to feel the same way.

 


 

Alya, welcome to The Legal Department. I’m excited about this conversation. I’ve been thinking about it for weeks now. I feel like lawyers of any ilk, especially in-house, can’t go anywhere without being tripped up and asked about AI and what you think about it. I’ve been doing my best to pantomime through those conversations.

I’m excited to be here.

You speak a lot on this topic. I know you were at Harvard Business School and the much-anticipated health conference, which I want to hear a little bit about. I’m hoping we can get from this conversation maybe a few key things that in-house lawyers may need to know about AI. I feel like you’re a thought leader in this area for lawyers. Can you tell us how you got into that?

I’m excited to be here and have the conversation. Demystifying this stuff is part of what motivates me to go to these conferences and write about this topic. I’m excited to dive in and hopefully do that. I’m a product counsel. I help people take their ideas from a light bulb to launch, and then keep going as they deploy their products into the wild. In healthcare, in particular, data is a key driver for many digital health products and applications.


I am a privacy lawyer by training. That came up often, especially in my last job, where I was in-house at Epic, the global EHR company, which is a flat organization, meaning any of the thousands of software developers at the company could call me directly at any time and ask me a question. More than once, I’d get a call from a creative and ambitious young developer who would say, “I’ve got this incredible idea. I have this algorithmic model based on this journal study that is more effective at predicting someone’s likelihood of developing breast cancer. I’d love to build out the model. All I need is access to 20 million people’s data in order to prove it. Any issues with that?”

No problem. Can we do that next week?

At Epic, it wasn’t even that you needed to be a “yes-if” lawyer; you needed to be a “yes-and” lawyer. My job was to figure out the path forward to enable and support good ideas in a way that was respectful of the privacy obligations and data stewardship obligations the company had as the provider of electronic health records to the vast majority of major health systems in the US. Privacy was my gateway into artificial intelligence.

Analyzing the data that’s available, the rights associated with an organization’s ability to use that data, and the appropriate privacy protections that needed to be in place across the AI or machine learning product development lifecycle was how I got steeped in all the other regulatory, transactional, and intellectual property issues that come with developing these complex, interesting, and incredibly impactful tools.

Privacy is a core skill of most in-house lawyers, especially folks that touch healthcare, but there’s so much more. As a health lawyer, I instinctively go to privacy as the main area that I need to know about, but it’s much more than that. If I’m an in-house lawyer, what’s my number two after privacy?

Product counseling fundamentally comes down to setting expectations and carefully crafting the responsibility split between the product that someone is developing and the buyer of the product, the deployer of the product, and the end users. You can’t do that without understanding liability frameworks. Frankly, the second most important knowledge point that I used in my product counseling is the idea of who’s responsible if something goes wrong, both under regulatory frameworks and common law liability frameworks. We’re talking tort law, negligence, and product liability, and having clear answers on that to the extent possible.

The next question at the tip of my tongue is, how do we know that? I was at a meeting where we were talking about AI having such potential to help clinicians make their jobs more efficient and reduce burnout, with automated SOAP notes and ambient AI. This doctor asked a question: “I’m not sure who’s liable, but I’m pretty sure it’s me if there’s an error.” I don’t want to limit it to the healthcare context, but how do we know what that liability framework is when we’re dealing with an AI tool?

It is a complicated landscape. There are so many players and so many doctrines and interactions between the various players and doctrines. I like to joke that just like we’re seeing big tech companies race to have the greatest and best foundation large language model, regulators and policymakers are also racing to develop the rubric for who’s responsible for these technologies across the life cycle.

Determining who’s responsible comes back to a fundamental step that any in-house lawyer who’s being asked to advise on this needs to take before doing any level of legal analysis on the actual AI tool. That fundamental step is, “Define the problem that you are trying to solve and what it looks like to succeed in solving that problem. What does it look like for your AI solution to work?”

That’s not something that an in-house lawyer is going to independently be able to answer. In my experience, in-house lawyers have a critical role to play in shepherding teams through answering that question. Sometimes that means realizing that, “Based on the data that we have to use and the technology that we have to feed it into, there’s a limited lane where this tool is going to be helpful or where this tool is useful.” That’s okay for now. Lawyers can play the role of encouraging people that that’s okay for now and maybe that’s our minimum viable product to get started with, but let’s be clear on where the guardrails are because without defining what it means for this to work, it’s hard to answer the liability-related question which is, “How could it fail and for whom?”

The first step is looking at whatever AI solutions are being put on your desk as the person to get the contract done or whatever. What are we trying to solve? If this were to be implemented, what is the liability framework we’re looking at? Who could be harmed by this? Is that what you’re saying?

Exactly. The answer to “What does it mean for this to work?” is harder to pin down than many teams realize. My bias is in healthcare, so a lot of my examples are going to be healthcare-based. Say I’ve got an AI tool that is a predictive risk model that says, “Here’s someone’s likelihood of developing diabetes based on what we know about their current health status, their diet, and their activity level.”

If I’ve only trained that model on patients in the Omaha, Nebraska area, I should go in with eyes wide open to the potential shortcomings of that AI tool if I try to deploy it in Southern California or maybe in Los Angeles County. There may be attributes specific to the patient population that make it a lot harder for that model to operate on an accurate and reliable basis when thrown into the wild with an entirely different population.

I hate to be this way, but it makes me feel a little scared. There’s such a frenzy around this technology being the next internet, let’s just say that, and all the promise. The geographic bias is a great example, but there’s another bias. If your database is only getting data about a certain type of person or a certain ethnicity, it could be a situation of garbage in, garbage out. You’ve made an investment. If you’re relying too much on this technology that’s based on data that may be imperfect, what are we getting? It’s not a legal question, but as you’re talking, the hair is standing up on the back of my neck.

The “What are we getting?” question is the question I likely spend the most time helping folks get their arms around in private practice these days. Meaning, organizations are realizing that the way they assess digital health tools and vendors of those tools needs to change a little bit when it comes to evaluating AI-based tools and vendors. The things you need to know require purposeful transparency on the part of the developers of these tools.

To me, purposeful transparency means they’re not just giving you pages and pages of white paper about how great their tool is, but they’ve actually done some meaningful analysis on when their tool does well. They have an awareness of its limitations and when it doesn’t do so well, whether that’s with certain populations or in a particular context. Maybe their tool, based on the outputs, needs to be reviewed by someone with a particular educational background or training level.

This strikes me as an area where in-house counsel could help teams do some diligence. Everyone is super excited about whatever solution, but maybe a takeaway for the audience is for the in-house team to try to lead some of these conversations to get some of this information before you commit.

That would be a great step for folks to take. There are a range of template contract clauses and diligence questions that are emerging. We don’t have the due diligence questionnaire that’s industry-standard for AI tools yet, but there are a lot of folks out there with a lot of good ideas on key things to ask. There are emerging ideas in the industry, like a nutritional facts label for AI.

If you ask for a standard set of information across vendors, you can actually make an apples-to-apples comparison if you’re considering two vendors with similar solutions. Think about it like, “Here are the ingredients. Here are the data elements that we considered predictive for those tools. Here are the uses and directions for use. Here are the warnings: ‘Do not use it in the following contexts. Don’t attempt to rely on it to do X.’”

I think we have a lot to learn from nutritional facts labels. There are lots of interesting mock-ups that I’ve seen that try to apply that concept to different types of AI, whether that’s use in the HR context, which is another huge area where we’re seeing AI tools deployed, or in care delivery, revenue cycle management, operations, scheduling, you name it.

From a healthcare context, when we talk about using AI in rev cycle, I’m like, “All day long.” No one is getting hurt by that. I see that you’d want to take the load off of doctors, but you hate to have there be a hallucination in that ambient note that leads to the wrong medication. As for helping folks on the front end do some diligence, I’m going to be waiting at the edge of my seat for the diligence checklist. That’s an amazing thing, and if and when people like you develop that, I’ll be in line. Let’s keep with the liability theme because I think that’s what most lawyers are worried about. IP, yes, but that’s a downstream problem. I’m most worried about whether people are going to get hurt by this somehow.

We talked a little bit about privacy, and I agree that AI legal issues are about more than privacy. At the same time, that is central to the development of these tools. Frankly, the vast majority of responsible AI harms that I’ve seen can be traced back to the characteristics of the underlying data sets. While privacy is part of the equation, data governance more generally is one of the most important risk mitigation activities an organization can do when they are evaluating AI tools or deploying an AI tool using their own data as the training or proving ground.

Let’s talk about that on a granular basis. Is data governance for AI different from data governance more broadly? I’ve been at organizations where data governance feels very overwhelming. It’s like a major cultural thing. There are political parts to it. There are ethical parts to it. Is AI governance a little different or is it part of that broader painful process?

Traditionally, data governance has been focused on being privacy-protective, transparent, and making sure that you’ve got a legal basis to do the things that you’re doing, and that you’re doing things in ways that align with industry standards and consumer or patient expectations. Data governance for AI is different in the sense that what’s important is doing all that stuff, but also taking a moment to pause and reflect on the quality and the nature of the data that you have. Not all data is created equal. Some are messy. Some are rife with errors. Some are not standardized, poorly labeled, or unstructured, and would take a lot of work to use. All of those things can impact how effective that dataset is in either developing or configuring an AI model.

Basically garbage in, garbage out.

Yes. There’s an emerging standard that’s being developed. There’s some research that came out of a group called Aether, which is a coalition of folks from Microsoft and some leading industry and academic researchers. They created something called Datasheets for Datasets. It’s essentially a list of questions to ask about data sets to help get your arms around what anomalies may exist in a data set that could lead to anomalies in AI model performance. The questions that framework asks are, “Why did we collect this in the first place? Why was this created in the first place? It probably wasn’t collected or created to feed the AI tool you bought. Who is this data about?”

These folks sound open-source. Is Datasheets for Datasets something that is available that we could link to? Or do people have to buy it?

It’s freely available online and it’s accompanied by an excellent journal article that explains the methodology.
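For readers who want to try this against their own data, here is a rough illustration of how a few Datasheets for Datasets-style questions could be captured as structured metadata that travels with a training dataset. This is only a sketch in Python: the field names and the example entries are illustrative assumptions, not the paper’s own schema, and the real framework is a prose questionnaire rather than code.

```python
# Minimal sketch: capturing a few "Datasheets for Datasets"-style questions as structured
# metadata kept next to a training dataset. The field names are illustrative assumptions;
# the actual framework is a prose questionnaire, not a fixed schema.
from dataclasses import dataclass, field


@dataclass
class Datasheet:
    dataset_name: str
    motivation: str            # Why was this data collected or created in the first place?
    population: str            # Who is this data about? Which groups are over/under-represented?
    collection_process: str    # How was it gathered, and by whom?
    known_gaps: list[str] = field(default_factory=list)      # Missing fields, label noise, etc.
    permitted_uses: list[str] = field(default_factory=list)  # Uses supported by consent/contracts


# Hypothetical example, echoing the diabetes risk model discussed earlier in the episode.
diabetes_risk_data = Datasheet(
    dataset_name="regional-diabetes-cohort",
    motivation="Collected for routine care and billing, not to train a risk model.",
    population="Adults seen at clinics in one metro area; limited geographic diversity.",
    collection_process="Extracted from the EHR; diagnoses coded by clinic staff.",
    known_gaps=["Sparse activity and diet fields", "Under-representation of some age groups"],
    permitted_uses=["Internal quality improvement", "De-identified analytics"],
)

print(diabetes_risk_data)
```

The value is less in the code itself and more in forcing the conversation: if a team cannot fill in fields like these, that gap is itself a data governance finding.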

I haven’t seen a ton of this yet where I am, but as this momentum grows and the pressure builds to have some AI solution in your organization, nobody is looking out for these issues. The in-house counsel has an opportunity to bring order to chaos.

Frankly, in every in-house experience I’ve had, a hallmark of each of my roles was being the one to help the organization think through these tough questions, and even identify that these are questions the organization should answer before pushing ahead full throttle to experiment with the newest and shiniest thing on the market. In-house counsel has a critical role to play in surfacing these questions and trying to facilitate a meaningful conversation.

One thing I wanted to mention, and this is something that I get different reactions to, but I have to evangelize about it because I believe in it. Organizations all over the place are setting up cross-functional committees to be their AI committee and address these issues. The model is that you get someone from your CIO’s office, someone from your product or marketing office, someone from your leading business unit, and a lawyer in a room. They’re going to put their incredible expertise together, define a path forward, and come up with a good AI governance strategy.

I don’t think that works. I think what actually works is putting that cross-functional committee together and investing in a foundational education for the whole committee on AI, including helping them understand how AI impacts their respective colleagues’ specific areas. It seems ridiculously inefficient, but I literally think that the lawyer needs to understand the technology and the product people need to understand the legal and regulatory frameworks. That cross-functional learning is critical so that the committee speaks the same language. They can actually share their knowledge and apply it collectively to the very fine-grained problems that they’re going to need to solve and answer in order to support the organization’s AI deployment. That’s my take on cross-functional committees.

That’s a concrete tip for the audience. With this momentum that’s coming, without folks being on the same sheet of music about this technology, the data we have, how we’re going to use it, and how it impacts all of us, it could set up some tensions between different roles even if you are all on a committee, if you don’t understand all of these different aspects. Where would we get that education?

I am always happy, especially in the digital health and health AI space, to be a sounding board for folks. The other thing I will say is that there is an incredible amount of content coming out every single week about AI; it’s in every New York Times and Wall Street Journal headline. It’s the topic of every piece of content I’m reading.

It’s overwhelming.

One thing I read recently that I thought was excellent and highly practical was a short book. It’s accessible for even our lightest readers in the audience. It’s called Ethical Machines. It’s by Reid Blackman, who has a PhD. It is a practical explanation of how we, as people in the business world, should think about ethics in terms of AI evaluation, deployment, and creation.

This whole idea of, “What do you do from an ethical perspective when your AI misbehaves?” is something that Reid breaks down from a practical perspective. He dispels the notion that ethics is a squishy abstract thing that we pay lip service to, or that it means taking our brand values and applying them to whatever we’re going to do on the AI front. It’s about responsibly using AI through a lot of organizational education. He does a good job of explaining the key concepts in this book. That’s one resource I’d highly recommend.

One issue I’ve been thinking about is our workforce challenges. How can I, as an in-house lawyer, use some AI tools to make my job easier and make me more efficient?

I’d love to talk about that. That is a small area of passion for me. I’m not just somebody who likes to read and talk about these tools. I love to use them and experiment with them. I will say that I strongly believe that every lawyer will be an AI lawyer in the next few years because it is going to become an issue that touches all different specialties, but it’s also going to become an issue that impacts how we do our work on a day-to-day basis. There’s incredible potential for these tools.

2022 was the coronation year for generative AI, and we are now starting to see its real-world applications. I’m going to talk about a couple of ideas for in-house counsel in a second. The key thing is that there were these things that we thought only human creativity could contribute to and human ingenuity could create. All of a sudden, we’ve got chatbots and content generators making images, making videos, and drafting articles that are as well put together as something that even a trained expert human could put together.

In summary, it is a foundational new infrastructure for how we, as humans, communicate and how information and content are generated and shared. There’s this tipping point, “Is this like the internet?” I actually think this is probably more profound than the internet. We are so lucky to be on the ground floor. Frankly, all the AI tools that we’re using now are the worst AI we will ever use in the rest of our lifetime. That’s what I like to think about.

That’s a silver lining to feeling like you have no idea what you’re doing. It’s only going to get better.

Workforce challenges being what they are, I have been experimenting with lots of different interesting uses for certain large language models. It takes a lot of time to negotiate a contract. It often involves organizing and reviewing redlines and then documenting counter-arguments and your responses. At times, if you’ve got limited time to prepare for one of those negotiations, it can be a little overwhelming to prepare to go into it.

I’m worn out even hearing about it, and remembering back and forth, tracking the red lines, and making sure I have the right version. Did I catch all your comments?

There are lots of cool AI solutions cropping up that are add-ins to Microsoft Word, which automate the checking of definitions and make sure all of your cross-references are right. It doesn’t require any special Microsoft Word or macro skills or special formatting like the olden days. There are AI tools that do that for you. Contract Companion is one that I was introduced to recently. I’m not an endorser of Contract Companion, but if you’re out there, folks from Contract Companion, I’m a fan.

There are tools like that that are worth investigating and potentially licensing because of the time savings of not having to worry about all that minutiae. Something that’s maybe even more interesting and accessible, which you could do with the free version of any enterprise large language model, whether ChatGPT, Google Bard, or any equivalent model, is creating a negotiation table for a particular clause in an agreement. What do I mean by that? If there’s a clause that I’m going back and forth on with opposing counsel, and there’s not necessarily a meeting of the minds on where the language should go, you can use ChatGPT to essentially identify the key issues associated with the clause and help you understand arguments for the change that you want to make.

Literally, I would type into a prompt, “I’m in-house counsel for an organization. My vendor is trying to add a termination for convenience clause to the contract. I want to object to it because my team is going to be relying on this tool. We can’t run the risk of this vendor terminating for convenience and leaving us high and dry without the solution. Can you outline arguments, as the customer, as to why I can’t accept this termination for convenience clause? Also, outline the counter-arguments that the vendor is likely to use. Propose some alternative language that might address some of these concerns.”

I would laugh too, except for the fact that with some careful prompting, it’s not going to reeducate you on the issues associated with the termination for convenience clause, but it will give you a pretty nice checklist of the issues that could come up and a couple of good ideas to potentially respond to those issues, including some contract language. It’s going to do it all in two seconds. That two-second part is the magic for someone who is running short on time, managing a busy to-do list, and pressed for resources. That’s just one idea, and something I feel like I need to do a LinkedIn Live demo on is how to use ChatGPT this way.
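To make that workflow concrete, here is a minimal sketch of how a prompt like the one above could be sent to a large language model programmatically rather than through the chat interface. It assumes the openai Python package (version 1 or later) and an API key in the OPENAI_API_KEY environment variable; the model name and the exact prompt wording are illustrative assumptions, not a recommendation of a specific product, and the same idea works with any comparable enterprise LLM.

```python
# Minimal sketch: drafting a negotiation "issues table" for a contract clause with an LLM.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable.
# The model name and prompt wording are illustrative, not a specific recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I am in-house counsel for an organization. A vendor wants to add a "
    "termination-for-convenience clause to our contract. My team will rely on this tool, "
    "so we cannot risk the vendor walking away and leaving us without the solution. "
    "Please outline: (1) my arguments as the customer against accepting the clause, "
    "(2) the counter-arguments the vendor is likely to make, and "
    "(3) alternative contract language that might address both sides' concerns. "
    "Format the output as a three-column negotiation table."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works; choose per your organization's policy
    messages=[{"role": "user", "content": prompt}],
)

# The output is a brainstorming aid for human review, not language to paste into a redline.
print(response.choices[0].message.content)
```

As the conversation stresses, the output is a starting point the lawyer accepts or rejects, and, per the policy discussion later in the episode, confidential or privileged contract text should not go into a public tool without an appropriate enterprise agreement in place.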

What I like about that example is that it gives the lawyer control. One of the things that makes me nervous, and I don’t know if it’s the same for other people, is having some technology that’s running without human ability to interpret and put the brakes on. I like the idea of being able to run a simulation, and then still have to use your legal ability to interpret what the machine came up with, and then you could use it or not use it. It gives you choice and authority over the technology. I like that.

The key use case that’s interesting and totally along that vein is brainstorming. It’s a brainstorming partner. Rather than Googling something and having to filter through lots of different things, you can try your hand using one of these enterprise large language model chatbots, and then see what comes out. To your point, it’s up to you whether you use it or not, but I’ve found more often than not that there’s maybe 1 issue out of 5 where it’s like, “I didn’t think about that.”


Also, it was two minutes, so no love lost. One more AI question before I want to get into a little bit more about you, your professional trajectory, and what drives you. I’ve seen a lot about having an AI policy for your organization. I’m wondering, is that a must-have? Should we have already had one of those and we didn’t know it? How do we go about developing one?

I have an unpopular opinion about AI policies. It’s that broad-spectrum, one-size-fits-all AI policies often fall short of addressing the complexities of most AI applications. It’s one thing to say, “Here are our organizational principles and priorities. We expect you to keep them in mind when you’re using AI and only use AI responsibly.” It’s one thing to say that versus to say, “ChatGPT is a terrible calculator. Do not use it as a calculator. Its accuracy rate plunges off a cliff if you try to multiply numbers with more than one digit.” That’s a lot more memorable to folks, and they remember, “If I’m planning on playing with ChatGPT, I won’t try to use it as a substitute for my Excel formula.” That example is meant to illustrate the fact that people seem to do a lot better with specifics.

Don’t you think that’s across the board? With anything that’s more open and values-based, like, “We’re all going to do the right thing,” people need a step-by-step. They need a checklist of what they can and cannot do.

I am a big proponent of, if you’re going to do an AI policy, giving folks, to exactly your point, a step-by-step. Say, “Here are the use cases that we feel okay with. It’s generating blog posts for the company website. It’s taking some marketing text and asking a large language model for linguistic help to make it more persuasive or punchy.” Listing those use cases in a centralized place that people can reference, and then having an intake process so other use cases can be reviewed by that all-star, all-trained-up cross-functional committee we talked about, is a much more effective approach to AI governance than one of these use case-agnostic, abstract policies or principles frameworks.

That’s my take on AI governance. The cool thing is that you give folks, or start to create for folks, a roadmap and different ideas for how they could use this stuff safely. If you don’t have an enterprise license for one of these tools, you can make clear that confidential information or privileged information shouldn’t go into the tool. Having that list of approved use cases ends up actually encouraging people to continue to experiment with these tools and find ways to save time more effectively. Taking more of a specific, use case-driven approach turns out to be a broader enabler.
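As a rough illustration of the use case-driven approach Alya describes, an approved-use-case list can live as a simple, centrally maintained register that employees check before reaching for a generative AI tool, with anything not on the list routed to the cross-functional committee’s intake process. The entries, field names, and routing logic below are illustrative assumptions sketched in Python, not a recommended policy.

```python
# Minimal sketch of a central register of approved generative AI use cases.
# Entries and conditions are illustrative examples only, echoing those named in the episode.
APPROVED_USE_CASES = [
    {
        "id": "UC-001",
        "description": "Drafting blog posts for the company website",
        "conditions": ["Human review before publishing", "No confidential or privileged input"],
    },
    {
        "id": "UC-002",
        "description": "Reworking marketing copy to make it more persuasive or punchy",
        "conditions": ["No customer or patient data in prompts"],
    },
]


def route_request(use_case_id: str) -> str:
    """Return the conditions for an approved use case, or a message routing the idea
    to the intake process for review by the cross-functional AI committee."""
    for uc in APPROVED_USE_CASES:
        if uc["id"] == use_case_id:
            return f"Approved, subject to: {'; '.join(uc['conditions'])}"
    return "Not on the approved list. Submit through the AI committee intake process."


print(route_request("UC-002"))
print(route_request("UC-099"))  # an unlisted idea gets routed to intake
```

The design point is the one made above: a concrete, centrally referenced list plus an intake path is easier for people to follow than an abstract principles framework.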

I also like your suggestion to have a process so people know where to go if they have other ideas because this is such a dynamic and evolving technology and we don’t know where it’s going to go. If you have something that’s too static, it’s not going to keep current. I like that idea a lot. You left Epic, an in-house job that many people would like. Although, on my notepad, a couple of things jumped out at me. The fact that anyone in the company could call you, I’m exhausted even hearing you say that. How did you manage to have the biggest open door or the widest open door?

The Epic legal team is really lean but so effective. The thing I miss most about Epic is the incredible colleagues that I worked with every day to navigate some of the most cutting-edge issues in healthcare. A lesson that I learned very early on at Epic was the lesson of creating self-service tools and resources for folks in the company to reference so that basic questions like, “What is HIPAA?” were things I didn’t have to spend time answering on a call.

I could point someone to an internal wiki or other resources, and then reserve my time for the complex, interesting questions that made it so fun to work for a company like Epic operating on a global scale. Self-service resources, to the extent there are tools you can create for your business teams that essentially let them guide themselves safely through certain issues or processes, are always a win in my book.


I have other colleagues who do that. I aspire to do that. You described you had great colleagues and interesting work. Why go to a law firm? Most people make the change in the opposite direction. What drove you to McDermott?

I knew of McDermott’s reputation in healthcare and as a supporter of innovation from being in the industry. I was interested in looking at issues on the technology transactions, digital health product, and regulatory side from different angles, for different entities in the healthcare industry. Epic is an incredible company and a huge, powerful driver in digital health, but they’re one player in a very complicated landscape of provider organizations, health plans, digital health companies, and trade associations that are all shaping how we receive care and interact with the healthcare system, and how the people we love receive care and interact with the healthcare system.

I was interested in looking at these issues from different perspectives. McDermott has delivered on facilitating that experience. It is an incredibly diverse group of attorneys that touches every single angle in healthcare. The depth of the bench and the expertise here are incredible. It’s made for a lot of fun and a lot of novel issues cropping up day-to-day and week-to-week. That was my primary driver. Also, most folks who work for Epic, apart from the Epic international offices, live within a 45-minute driving distance of Epic’s campus in Madison, Wisconsin.

As I mentioned on the call, I’m based in Los Angeles. I do not miss the Wisconsin winters. I much prefer never looking at my weather application on my phone. You don’t realize the mental burden of looking at your weather app multiple times a day until you move to LA and you never look at it because it’s pleasant every day.

I notice when I travel, do speaking and stuff, and when I go to the Midwest, I realize I don’t have a lot of warm clothes. I look in my closet and I’m like, “Why do I have so much rayon? That’s not a real fabric for anywhere other than Southern California.” You don’t know what to do with your sweaters.

We moved to Los Angeles and we had an entire closet filled with winter coats and we’re like, “We could probably scale back on these for now.”

The last question I ask everybody is this: through my life and my career, I’ve leaned on music, and I use it all the time when I’m going into a big meeting or having a hard time at work. I listen to a couple of key songs that I call my pump-up songs. I ask all the guests what their pump-up song is.

I am a big ‘90s person. I love the fashion of the ‘90s. I love all the music of the ‘90s. I adore most ‘90s movies. My pump-up song is 1979 by the Smashing Pumpkins. I find it to be an instant mood lifter. Without fail, it helps me reset my perspective and gear up for whatever is to come. 1979 by the Smashing Pumpkins is my answer. I highly recommend it.

I wonder what would happen if we asked ChatGPT if it had a pump-up song.

I wonder too.

Alya, thank you so much. I feel like we could do this every week and keep talking about all these emerging issues. I appreciate your time. Thank you so much. I know our audience is going to enjoy it.

Thank you, Stacy. It’s a total pleasure to chat with you and I hope to continue the conversation with you soon.

My name is Alya Sulaiman. I’m a partner with McDermott Will & Emery, a global law firm. I’m based in their Los Angeles office. I work on technology transactions and product counseling. I specialize in data use and artificial intelligence in digital health. I was in-house my entire career before joining McDermott in late 2022. The fun fact about me is that I play the drums. What started as a pandemic hobby is now my absolute favorite creative outlet. Banging on a drum kit once in a while is a pretty effective form of stress relief.

 

Important Links

 

About Alya Sulaiman

Alya Sulaiman helps clients navigate complex regulatory, privacy and transactional matters related to technology, with a focus on data strategy, artificial intelligence (AI) and machine learning. Alya counsels companies on legal and business frameworks for developing and deploying innovative technologies, including predictive analytics, decision support algorithms, electronic health records, interoperability tools, health data platforms and digital therapeutics. As a Certified Information Privacy Professional (CIPP/US), Alya draws on her extensive privacy and security law knowledge to help clients comply with regulatory obligations while achieving strategic business objectives and maintaining public trust. Prior to joining McDermott, Alya worked at the intersection of healthcare and technology as corporate counsel and director of health policy and regulatory affairs for a leading global healthcare software company.
