The Legal Department

6 Steps To Make Your Organization Cyber Safe And Future Proof With Dominique Shelton Leipzig, Mayer Brown


 

Are you overwhelmed with worry about looming cyber threats and feel like your organization is a sitting duck? Dominique Shelton Leipzig, a privacy and cybersecurity partner at Mayer Brown and one of the world’s leading thought leaders in digital privacy and data governance, is here to give you a plan of action. On today’s episode, Dominique shares a six-step action plan for how the legal department can help make organizations cyber safe and futureproof. Dominique’s passion for data leadership will motivate you to take action. A lifelong champion of women, Dominique also co-founded NxtWork, a non-profit helping women land corporate board seats.

Listen to the podcast here

 

6 Steps To Make Your Organization Cyber Safe And Future Proof With Dominique Shelton Leipzig, Mayer Brown

On this episode, I’m very excited to welcome a longtime friend and expert, Dominique Shelton Leipzig, a cybersecurity and data privacy partner at Mayer Brown. She leads the firm’s global data innovation team, and in her spare time, she’s the author of four books, including the bestselling Trust.: Responsible AI, Innovation, Privacy And Data Leadership. Dominique, welcome to the show.

Thank you. I’m glad to be here with you.

I’m thrilled to have you here. When I conceptualized the idea for the show, you were already on my radar screen as somebody I wanted to include for many reasons, not the least of which is that I respect you as a leader, having worked under you when you were President of the Women Lawyers Association of Los Angeles. You are also a thought leader in this very intimidating space of cyber and privacy. I’m excited to have you here.

Thank you. I remember our days working together on the Board of the Women Lawyers Association of Los Angeles. As soon as you reached out, I was thrilled.

Why Cybersecurity And Privacy Are A Must-Do

I appreciate it. I typically start with more of what we’re going to talk about, but for this conversation, I wanted to talk a little bit more about the why. All lawyers, especially in-house lawyers, know that cybersecurity and privacy are things they must do. As my kids’ teachers say, there are “your must-dos and your may-dos.” Cyber and privacy are must-dos. I have to say, as I open the episode, that it is a very intimidating space. I’ve even had a listener email me and ask, “How do I get myself to like privacy? Because I don’t.” I know you have a lot of passion for it. Any thoughts on how we can reframe ourselves to not be intimidated or apprehensive around these topic areas?

It’s exciting because data, digital, AI, and other emerging technologies are going to transform all of our lives. Every single aspect of education, employment, finance, health, transportation, the power grid, the water supply, everything is already very much controlled by digital, and it’s going to be even more so as AI deployments begin at scale. Everybody is already impacted by data. It’s a great idea to understand how to lead in that area, understand how it impacts your own life, and then take control.

I like taking control. I have to say, in boardrooms, at conferences, and from consultants, the message is that it’s not if but when you’re going to be hit with a cyberattack, and that makes you feel out of control. It makes you feel scared, like, “I’m cowering. There’s nothing I can do about this.” You don’t believe that. You have a different view.

Six Steps To Make Organizations Cyber Safe And Future Proof

We can’t prevent the criminals from coming in and trying to deploy ransomware attacks and crippling attacks on our infrastructure. During the summer, with all the tensions going on in the world, seventeen hospitals were taken offline by criminals, most likely state-sponsored actors outside of the US. Those are things that we certainly cannot control, especially when there are national security tensions around the world. What we can control is the vulnerability of our organizations to those exploits. That’s the wonderful thing. With cyber, privacy, and AI, there are essentially six steps to make your organization safe and futureproof. Those are the steps that every company should be taking to minimize the impact of these dangerous exploits.

Let’s get into these six steps through the lens of the in-house counsel’s role in deploying them.

First and foremost, it’s an understanding that there needs to be someone in the organization whose primary responsibility is data. It can be a couple of people. Some companies have data teams. Others have a chief data strategy officer, a chief information security officer, or a chief privacy officer, but there has to be somebody in the organization whose primary responsibility is taking care of data. It doesn’t have to be their only responsibility, but data has to be a primary responsibility.

There needs to be someone in the organization whose primary responsibility is data.

That also needs to be documented. There needs to be an internal policy, which the lawyers help draft, documenting who is going to be responsible, because if we don’t name it, nobody claims it, and there’s no accountability. You want it documented that someone is keeping an eye on what’s going on in the organization. The second step is knowing what you’re keeping an eye on.

That means having an inventory of your data. You can’t apply governance protections or any prophylactic measures if you don’t know what you have and where it is. For companies that are serious and want to move, getting a handle on that can be done in two or three weeks, or it can take some time, three or four months. Once you decide, “I’m going to go in and find out what we have going on,” the questions are: What are we collecting? Where are we storing it? Who are we sharing it with? Those are the three key things to understand.

On doing a data inventory, I was at a conference not too long ago where someone said you need to know where your crown jewels are. Many companies know. Certainly in healthcare, we know our EMR, or Electronic Medical Record, is a cash cow, and our ERP system holds other business data. We’ve talked before about places that collect data that you may not think of, like your parking garage swiper or your website. Obviously, there’s a lot of litigation around cookies and all of that. What is the approach? I want to dig into this a little bit because, as you’re saying, knowing what you have is a fundamental step. What can we do to figure out where all the data is and what we have?

Eventually, this can be automated. There are some wonderful organizations, Ethical AI, BigID, and other vendors out there that do data maps and data inventories by deploying code throughout even fractured, non-contiguous network architecture to identify where data is. Then you can sweep up with some targeted interviews in the areas where personal data, for example, might be collected. Think about the procurement, legal, marketing, healthcare, and product teams, and financial services. Those are likely to have caches of personal data. Also, increasingly, the AI teams, because they’re looking at data to train models and have a repository of data that they’re using. For other reasons we’ll talk about, it’s good to know whether you have personal data being used to train any customized application of your models, because you need to see if you have permission to do that.
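
To make the automated scanning she describes concrete, here is a minimal, hypothetical sketch in Python of the kind of schema scan such data-mapping tools perform. The patterns, source names, and columns are illustrative assumptions, not any vendor’s actual product; real tools also inspect the data values themselves, not just field names.

```python
import re

# Column-name patterns that commonly indicate personal data.
# (Illustrative categories only; tune these to your own systems.)
PERSONAL_DATA_PATTERNS = {
    "contact": re.compile(r"email|phone|address", re.I),
    "identifier": re.compile(r"ssn|device_id|license_plate|badge", re.I),
    "sensitive": re.compile(r"health|diagnosis|race|ethnicity|geolocation", re.I),
}

def scan_schema(source_name, columns):
    """Flag columns in one data source that look like personal data."""
    findings = []
    for col in columns:
        for category, pattern in PERSONAL_DATA_PATTERNS.items():
            if pattern.search(col):
                findings.append((source_name, col, category))
    return findings

# Hypothetical schemas pulled from two systems mentioned in the episode:
# a customer database and the parking garage swiper.
inventory = []
inventory += scan_schema("crm_db.customers", ["name", "email", "device_id"])
inventory += scan_schema("parking.swipes", ["badge_id", "license_plate", "timestamp"])

for source, column, category in inventory:
    print(f"{source}.{column} -> {category}")
```

The output is the seed of a data inventory, what you collect and where it lives, which you then round out with the interviews she describes.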

Those are some of the divisions of an organization that we typically talk to, and that I tell clients to talk to, to find out what they are collecting as part of their day-to-day. Personal information can include B2B data; that’s why I mentioned the procurement aspect. The next step is talking to the leaders of those divisions to find out where personal information is collected and giving them a good list of what counts as personal information, because there’s a lot of confusion. The marketing team might not realize that device IDs, persistent identifiers, and license plates captured when people drive into the parking lot are all considered personal information in California, to start there, but in most states and around the world as well. Give them a full handle on what is personal information so that you’re asking the right questions and your team knows how to answer.

If you say, “Are we collecting personal information?” that may not be specific enough, but if you say, “With our digital ads, for example, do we use persistent identifiers? Do we use device IDs? How are we able to target our customers?” then it’ll come out that what they’re using is considered personal information. That way you get a full handle on what you have. You might find that there’s personal information lying around the company that nobody’s using and you don’t need. That’s an opportunity right then and there.

Clean out the closets. Get rid of it. I’m overwhelmed by that step. I’m sure I’m going to call you after this and get some more tips. What is the third step?

The data inventory is key. The third step is a legal risk assessment. You need to know what laws apply to your use of data in the jurisdictions where you are doing business, and a legal risk assessment will help you understand that. Why do I talk about a legal risk assessment? Looking at it from the other lens, these six steps come from a review of over 167 FTC complaints and the activities that have triggered investigations. The French Data Protection Authority published six steps to comply with GDPR on its website back in 2017, and our 167-plus FTC enforcement orders fit very neatly into those six steps. I wanted to tell you the origin of that. What we looked at is that, around the world, there are now 161 countries with data protection laws.

Cyber Safe: Laws apply to your use of data in the jurisdictions where you’re doing business. A legal risk assessment will help you understand that.

 

The six steps work well in those, too. This is a tool that you can use everywhere, but you need to understand what laws apply to you, because if there is a slip-up, a privacy issue, a data breach, or something, and your CEO has to testify in Congress, and we’ve got six tech CEOs testifying right now, you want to be in a position for the CEO to be able to say, “We were aware of the laws that applied to us, and these are the steps we took to mitigate risks.” When I’ve been defending clients, and I don’t litigate anymore, but I do defend clients in regulatory investigations, this is the first question they ask. In addition to your training, vendor management, and policies, which all get produced, the very first question they ask in an interview is, “Where’s the legal risk assessment that all these policies and things hang off of? What risks were you mitigating when you put this together?”

It’s the same thing with HHS and other regulators. If you don’t have a legal risk assessment, all of that great policy and mitigation work you did can be set aside. Literally, I’ve seen regulators and investigators physically move the piles the client put together off to the side and say, “We’re going to do one right now because this could have been copied from anywhere. Let’s go.” You don’t want that. That’s a bad position to be in as a company.

That seems like avoiding self-inflicted harm. This is something you could proactively do ahead of time so you’re prepared and able to pull out your risk assessment in those meetings.

The fourth step is for high-risk data, any kind of sensitive data: race, gender, ethnicity, sexual orientation, religion, trade union membership, health, financial data, children’s data, precise geolocation, those sorts of things. They want an impact assessment, short form, like two pages. Different templates exist, and we certainly have them at our firm, but the bottom line is they want to see what extra steps you’re taking to protect that kind of data. Phase 5 is the overall mitigation being put in place to address the risks identified in the Phase 3 legal risk assessment and the Phase 4 impact assessment for high-risk data. That means vendor management: what kinds of things do you want your vendors to do to protect the data that they’re collecting on your behalf or that you’re entrusting to them to store? It also means training, not just your employees who are dealing with personal data or sensitive business-critical data, but everybody in the enterprise, to say, “These things are important to us.”

In the most successful companies I’ve seen with this, the chief privacy officer and general counsel do a two-minute video with the CEO, and right then and there, everybody gets it: “This is a culture of trust. In this organization, we’re serious about making sure that we are protective of the data that our customers are entrusting to us or that we are using on their behalf.” Training of the board is increasingly important because companies and organizations are being sued, many times, on the basis of a lack of board oversight on these issues, or because the board is never briefed on the topic, or is briefed only sporadically and not kept abreast of the threats in the industry.

That is a recipe for exposing the organization. In addition to vendor management and the training of the employees and the board, they’re looking for governance policies to be backed up by actual tech. You need a governance policy that says, “This is our stance. This is how we handle data. This is who has access to data.” The rules of the road for the company are then backed up with technical controls to prevent the rules of the road from being abrogated. The sixth step is an auditable record of everything I talked about earlier.
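
As a rough illustration of a governance policy “backed up by actual tech,” here is a minimal Python sketch of how a written data-access rule might be enforced and logged in code. The roles and data categories are hypothetical assumptions for the example; the logging stands in for the auditable record named as the sixth step.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data_governance")

# The written policy, expressed as data: which roles may access
# which categories of data. (Illustrative roles and categories.)
ACCESS_POLICY = {
    "hr_analyst": {"employee_records"},
    "marketing": {"campaign_metrics"},
    "privacy_office": {"employee_records", "customer_pii", "campaign_metrics"},
}

def access_allowed(role, data_category):
    """Enforce the policy and keep an auditable record of each decision."""
    allowed = data_category in ACCESS_POLICY.get(role, set())
    log.info("role=%s category=%s allowed=%s", role, data_category, allowed)
    return allowed

# Example: marketing asking for customer PII is denied and logged,
# so the rules of the road cannot be quietly abrogated.
if not access_allowed("marketing", "customer_pii"):
    print("Access denied per data governance policy.")
```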

It sounds very clean and straightforward, but it’s a lot of effort. I’m thinking of my own organization. We have robust cybersecurity and, I think, a great culture, but I still think there’s opportunity. All organizations have opportunities. I like how you’ve broken it down, but I do have a little bit of a pit in my stomach that there’s a lot of work to do.

The way I look at it, it has been elevated. I’m thinking about Davos, where the theme was rebuilding trust. Every year at Davos, the World Economic Forum releases the top ten global risks. For the first time, AI got on that list. Cyber is already on it. Every CEO and board member around the world looks to that list to see, “How do we mitigate this?” There’s enterprise opportunity and enterprise risk, and cyber and AI are right there. Rather than treating it as something tangential, the six steps get you going, but eventually, the idea is to fold it right in with regular enterprise risk management, because that’s what it comes down to.

How In-House Counsel Stay Abreast

It’s not set-it-and-forget-it. It seems like a living, organic function that you have to nurture. In your book, Trust, which I’ve read and enjoyed and recommend, you make the observation, and I appreciated it, that it’s unrealistic for the general counsel to know everything about cyber and privacy and to be able to anticipate all the opportunities and risks out there in these areas. I’m sure you work with a lot of folks in my position. We have ten plates spinning at any one minute, and it’s difficult to keep your eye on any one of them. What can a general counsel or in-house counsel do to stay abreast or keep this front and center?

Trust.: Responsible AI, Innovation, Privacy and Data Leadership

First, get help. We have a team of a hundred lawyers in our cybersecurity and data privacy group at our law firm. Separately, our AI group is a cross-disciplinary team of 60 partners, and we all have teams. We need that to be able to tackle the issue at hand: on the data privacy and data protection front, there are 161 countries with data protection laws around the world; on AI, 97 countries with draft legislation and regulatory frameworks. It takes all of us as a village to distill these things down into these six steps, this approach, thought, and effort. The idea that it can be done by one person is unimaginable to me because I know the team that it takes.

This is your day-to-day and you still have 99 other people that you’re relying on.

There are 99 other people on the cyber and privacy side and another 59 on AI, cross-disciplinary: IP, antitrust, bias and accuracy testing, which is not my area. First of all, there’s the realization that help is needed. That’s number one. Number two is figuring out, “Help is needed, but we also need to be cost-efficient; we have our budgets and so forth,” and starting to look at it differently. What I’ve been looking at in terms of AI is more from the standpoint that this is innovation protection, an innovation releaser.

Let me give you an example. On January 20th, a major logistics company in the UK, very similar to our overnight carriers that get us stuff overnight, implemented a chatbot to help with customer service, like a lot of our companies are doing. In these legal frameworks, AI chatbots are low risk. They’re not considered high risk, so a lot of the AI governance that I’m going to be talking to you about doesn’t apply. Customers have to know they’re dealing with a chatbot, and that’s about it.

To your point about set-it-and-forget-it, that’s not something you can do with AI either. They put the chatbot in place, and at the beginning it was accurate, responding to customer questions and doing great. They whittled down the customer service team to allow for more ubiquitous use of the chatbot. Suddenly they found out, not because they monitored it, but because it went viral on X, formerly Twitter, that their chatbot had been swearing at customers and literally writing haikus about how terrible the chatbot service was, how terrible the company was, and so on.

It’s funny because it’s not me.

There’s a little humor to it, but imagine if that’s our power grid and the chatbot is off, or it’s our health system or financial services. That is why it is important to pay attention to these issues, because data is infusing every aspect of your business and our society. It’s very important to put the guardrails in place so the company is at least alerted. If you at least knew that inside your chatbot tool there was a guardrail that said, “We want to be alerted if swear words are used in any language we do business in,” the dashboard alerts you. It’s not that you have to sit and personally watch the chatbot to make sure everything it says is accurate. We install code so that if the chatbot goes outside of your guardrails, the team that deals with and repairs these AI tools, the IT team, can go back and put it back in line using logging data and metadata.

It is so important to pay attention to these issues because data is infusing every aspect of your business and every aspect of our society.
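
As a loose sketch of the guardrail idea, assuming a simple blocklist approach, here is hypothetical Python that checks each chatbot reply before it goes out, logs it, and alerts the team that repairs the AI tool. The blocklist terms, the alert hook, and the fallback message are all illustrative assumptions; a production guardrail would cover every language the business operates in.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot_guardrail")

# Placeholder terms for illustration; a real list would be far larger
# and multilingual, per the "any language we do business in" point above.
BLOCKLIST = {"damn", "hell"}

def alert_ai_team(reply, hits):
    """Hypothetical hook: notify the team that maintains the AI tool."""
    log.warning("Guardrail tripped (%s): %r", ", ".join(sorted(hits)), reply)

def guarded_reply(chatbot_reply):
    """Screen one chatbot reply against the guardrail before sending it."""
    words = {w.strip(".,!?").lower() for w in chatbot_reply.split()}
    hits = words & BLOCKLIST
    # Log every reply so incidents can be reconstructed later from
    # logging data and metadata, as described above.
    log.info("reply=%r flagged=%s", chatbot_reply, bool(hits))
    if hits:
        alert_ai_team(chatbot_reply, hits)
        return "Sorry, let me connect you with a human agent."
    return chatbot_reply

print(guarded_reply("Your parcel ships tomorrow."))
print(guarded_reply("Well, damn, I can't find your parcel."))
```

The point is the alert, not the filter: nobody has to sit and watch the chatbot, because the dashboard fires on its own.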

That’s your comment about governance, which I do want to get into. For in-house counsel, it’s hard to keep up. I’ve received three different client alerts on AI issues, and I read something in Becker’s about the DOJ issuing subpoenas over the use of AI in electronic medical records. I appreciate that we need to have someone like you on speed dial, but I’m wondering if you have other suggestions. A few years ago, I got a certification from the IAPP, which I know you’re on the board of. It was a way to force myself to put my head down and focus on this for a while. Are there other ideas or ways that folks can keep abreast, or a resource they should have? Reading your book and having it on the shelf is helpful, but any suggestions for resources to get steeped in this?

Yes. I love that you mentioned the International Association of Privacy Professionals. It’s a great resource. The website has a lot of free content that is useful. Everybody can go to IAPP.org and follow that website. Sign up to get the alerts; the daily dashboard that comes through is super helpful in terms of headlines of what’s happening on the privacy front and, increasingly, on the AI front, because IAPP now has an AI governance center. I’m excited to sit on the advisory board for that.

They’re on the cutting edge. Other organizations, like the Future of Privacy Forum and the Center for Democracy and Technology, are very good on these issues. Bloomberg has a privacy law newsletter that is very helpful. Privacy Law 360 is another resource. I look at what comes through every morning. I also like the regular newspapers, honestly: the Financial Times, the New York Times, and the Wall Street Journal. I look at those every morning. There’s usually a headline about something happening with all these issues.

You can see right there, “I would not want to be the headline coming out of what happened to that company. Let me work backward to what I can proactively put in place in my organization.” There are all kinds of tools as well. RadarFirst has a tool that amalgamates all of the legal developments so that you can see them across different jurisdictions. OneTrust has something called DataGuidance, which digests all of the laws in various areas, new legislation, and so forth. I often have associates look there as well. The bottom line is this is an evolving area.

Those six steps I told you about are for privacy and cyber. The AI six steps are a little bit different, but there are still six of them, and the precepts are there. If you do those things, you’re going to be about 80% of the way along with what everything requires. There might be a little bit of adjustment, jurisdiction by jurisdiction, on some outlier things, but the bulk of it is going to be caught by the six things I talked to you about, because those six things are in almost every law as far as it goes.

My takeaway from the GC chair is that I and other people in my seat need to devote a good portion of our practice to building relationships with cyber professionals like yourself, implementing and maintaining the six steps, and staying abreast. I don’t know if this answers my reader’s question about “How do I make myself like it?” but it is a very concrete way to think about what I need to do day to day.

The thing about it is, if there’s a terrible ransomware attack, you can’t get into your system, and you don’t know what is on a particular system or where the crown jewels are, maybe some of them, but there might be others that you don’t, and you’re dealing with that in the middle of a crisis. Especially given the prevalence of these attacks, it’s much better to do it ahead of time than to wait and do it for the first time when you’re in the incident.

Best Practices For AI Governance

Learn on the job. I feel like AI is at our heels, and it’s certainly everywhere in the media and in legal publications. I had at least three client alerts about it. As we talked about before, governance is one of the foundational steps as we embark on this next frontier in technology. Can we talk a little bit about best practices for AI governance?

This is good news because it’s very doable and doesn’t take much effort beyond what is already happening in companies. It’s a matter of redirecting the energy to align it with the legal requirements. There’s a lot of effort going on in multiple AI governance teams in multiple organizations. They’re working earnestly and working hard, but they’re completely untethered from the legal frameworks laid out in 97 countries across 6 continents for how governments around the world, including the US, want AI governance to happen. Rather than develop something bespoke for your organization, what I encourage my clients to do is look to that forthcoming tsunami of legislation and map to it now, rather than develop something bespoke that is likely to be rejected.

Let me take a sidebar on privacy for a second. In 2003, we knew where privacy was going and how it was going to be enforced. The companies that figured that out sooner rather than later are doing great; two of the early adopters are in the $3 trillion market cap club. We can all think of a few tech companies that are at zero market cap, quite literally. This is not a nice-to-have. When you see a tsunami of legislation going a certain way, rather than being the last to adopt, it’s a good idea to be proactive, because you’re going to garner the trust that you need to be a successful company. Trust is foundational.

It's a really good idea to be proactive because you're going to then garner the trust that you need to be a successful company.

I’m sure that is why you titled your book.

I did read that trusted companies are 400% more profitable than the ones that aren’t trusted. People have good instincts. Think about it. You want to work with and spend your dollars with companies that you trust. These data bloopers can wipe out that trust.

A little bit of a rant on privacy: I struggle with our culture, where people are so open about many things in their lives, posting what they eat for dinner, exchanging inappropriate pictures, and so on, yet if there’s a cookie on a website they visit, it’s treated like a federal crime. I struggle a little with how heavy the laws are and how much regulation there is when a significant part of our culture is very open on certain topics.

That’s one of the things the senators are talking about with our tech company CEOs right now, as we speak, especially when it comes to children and explicit content. We have to protect people from that. A lot of it is digital literacy: shows like this one bringing up awareness about that duality we have in oversharing. It’s happening with AI as well. People are putting in their medical information, notwithstanding the terms, even though we’re lawyers and we write them and nobody reads them.

You just click, “Can I get to the shopping cart?”

The terms do say, “Please do not put anything sensitive into this chatbot. Don’t put in questions that are sensitive: your health, your social security number, and things of that nature,” but notwithstanding that, it is happening. That’s a reason to raise awareness about the usage of AI. Let me back up. The first thing you need to do under all of these laws and frameworks is to risk-rank the AI. I liken this to coming to an intersection where you have a stoplight. There’s red, where you know to stop. In the AI world, the regulatory frameworks in 97 countries on 6 continents, including the US, say there are certain activities they don’t want private organizations doing. That’s the hit-the-brakes red light: things like social scoring, looking at what somebody’s doing online to make a decision about whether they are your customer.

Another area that’s prohibited is monitoring individuals in public places. They don’t want private companies doing that. Frankly, they don’t want law enforcement doing it either, unless there’s judicial approval for it. There are seventeen exact areas of prohibited AI. I talk about them in the book. Those will evolve over time, so when you read the book, it’s a starting point, because you need to keep abreast of when additional prohibited use cases get added to that list. Whenever you see those, you should be hitting the brakes, and for companies that are training models now, it’s expensive to train models or to train your own application. Every GC, board member, and CEO ought to know what the seventeen things are and at least ask the question, “Do we have anything that we’re building that’s on a known prohibited list anywhere?” Usually, if something’s happening in Singapore, eventually it’s going to roll over to the US.

That was a helpful comment you made in our prep session: “Be aware of what’s happening in all the other countries because it will land here.” We know from our privacy experience in 2003 that it will become the law of the land everywhere. People are like zebras; they want to run in the pack. Governments want to do what everybody else is doing.

The sooner you get going with that risk ranking, the better. That’s the red. The other thing you should know is where you have a green light. Chatbots are generally considered low-risk in all these jurisdictions. That doesn’t mean a chatbot can’t be embarrassing, like the example I gave you of one swearing at customers, but the governance required under the legal frameworks is putting a notice next to the chatbot so people know what they’re interacting with. Frankly, that’s already the law in California and has been since 2019. Not everybody follows it, but that is the case. The meat of the governance is in the high-risk, the yellow light, where you can go into the intersection, but you need to use caution. The regulatory frameworks are very didactic. They tell you what that caution should mean. That’s what the AI governance teams need to focus on.
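
To make the stoplight framing concrete, here is a minimal, hypothetical Python sketch of an AI use-case intake check. The categories and tier assignments are illustrative assumptions, not a complete or authoritative reading of any jurisdiction’s prohibited or high-risk lists.

```python
from enum import Enum

class RiskTier(Enum):
    RED = "prohibited - hit the brakes"
    YELLOW = "high risk - proceed with documented caution"
    GREEN = "low risk - basic transparency, e.g., a chatbot notice"

# Illustrative mapping of use cases to tiers, loosely following the
# red/yellow/green framing above. A real intake process would track
# the actual prohibited and high-risk lists in each jurisdiction.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.RED,
    "public_space_monitoring": RiskTier.RED,
    "hiring_decisions": RiskTier.YELLOW,
    "credit_decisions": RiskTier.YELLOW,
    "customer_service_chatbot": RiskTier.GREEN,
}

def risk_rank(use_case):
    """Rank a proposed AI use case. Unknown cases default to YELLOW
    so a human reviews them instead of waving them through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.YELLOW)

for case in ("customer_service_chatbot", "social_scoring", "fraud_detection"):
    print(f"{case}: {risk_rank(case).value}")
```

Defaulting unknown use cases to yellow mirrors her point that the meat of the governance lives in the caution tier.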

Making A Difference With NxtWork

Focus on yellow. I feel like we could have a series on all of these topics. As we’re winding down, I want to talk about another side project that you have, which is helping women and people of color move into board seats. Can you talk a little bit about the nonprofit you started, NxtWork?

Thank you for asking that. Women’s rights, organizations, and leadership are very important issues for both of us, having been on the Women Lawyers Association of Los Angeles board for so long. I started NxtWork with two co-founders because, in the aftermath of the murder of George Floyd, we heard many companies wanting to take a fresh look and reimagine their leadership and the issues they were focusing on at a broader level. It occurred to us that we know a whole group of amazing women, such as yourself, who would be amazing on boards.

It was an awakening: if a board or a C-suite isn’t diverse already, how are they going to plug into this network of amazing women? Where do you start? Where do you begin? That’s why we started NxtWork.org, to be the next level of connection, and it’s premised on meaningful engagement. We coined the term “meaningful engagement” as a force multiplier to achieve diversity.

The organization helps women land board seats. You do the hard work. Any success stories you want to share?

I’m excited. We now have six board placements that we take a lot of pride in. We recently placed Kalinda Raina, the Chief Privacy Officer of LinkedIn, with RadarFirst, a private equity portfolio company of Vista Equity Partners, effective January 1, 2022. We’re super excited about that one. We have even placed men. There was one particular board opportunity where they were looking for someone to head their audit committee, and they wanted a former CEO. We have a brilliant member who is a finance professor at Berkeley. She hasn’t been a CEO, but she’s amazing and would be perfect for an audit committee, so we put her forward first. For their own specific reasons, they wanted a sitting or former CEO to chair the committee.

We tried with our own existing member, and when that didn’t work, we said, “Let’s look.” There was a diverse man I knew. I put his name forward, and he’s sitting as head of the audit committee now. That was fun, not just being a funnel and a force multiplier for women, but helping men along the way. Three more of our members got on boards as we started talking about it as an organization and raising awareness. We started connecting with recruiters and different people, and women who were sitting on boards came in. We meet once a month, on the first Friday of each month, and different sitting board members, like Jan Babiak, who heads the audit committee for Walgreens, came in and spoke to us.

That was a catalyst for those three members, who did end up getting on boards through the discussion and the energy. They started their board journeys, talking to people and expressing their interest. The power of the idea is that we unleash in all of the women the knowledge and understanding that they don’t need to ask permission to be on a board. Once they say they’re interested, these opportunities start coming toward them, and then we act as a funnel. The differentiator for us is that we take over the introduction part.

Cyber Safe: We unleash in all of the women the knowledge and understanding that they don’t need to ask permission to be on a board. Once they say they’re interested, these opportunities start coming towards them.

 

That can be hard. It can feel like a high bar, almost impossible: “Where’s the doorknob? How do I get in?” That’s powerful. With all that you do in your thought leadership and your very extensive and busy law practice, it’s amazing that you’re making this contribution to women as well.

I’m excited about the impact these women will have.

Pump-Up Songs

The last question I ask all guests, which is a little lighter than cybersecurity and AI, is: what is your pump-up song?

Sometimes, to get me out of bed, my husband will put on Beyoncé. I don’t know which song because they all have a lot of get-up-and-go, but if I’m dragging, he’ll put one on and say, “Come on. Let’s go.” I guess that would be my pump-up song. I also like Taylor Swift. I like all of her stuff, but the Midnights (3am Edition) album is one of my favorites. There’s one song on there, Snow On The Beach. It’s not a pump-up song. It’s quieter, but I do listen to it while I’m working.

Those are awesome. I love Beyoncé and I love Taylor Swift. You have great taste in music in addition to all your other accolades. Dominique, thank you so much for being on the show. I enjoyed it. This is a really meaty episode that the audience is going to love.

Thank you so much for this show and for doing this important service to bring the legal department and community together.

 


 

About Dominique Shelton Leipzig

Dominique Shelton Leipzig is a Privacy & Cybersecurity Partner at Mayer Brown. She is an authority on how companies can transform their governance to become responsible data leaders by focusing on legal trends in AI, privacy, and cyber. She leads the firm’s Global Data Innovation Team, the first data team to focus on CEO- and board-level advice for digital transformation. She has been practicing law for over 30 years after obtaining her law degree from Georgetown University Law Center and doing her undergraduate work at Brown University in International Relations and French Civilization.

Dominique has advised on the global strategies for responsible data leadership for hundreds of companies, with a collective market cap of over $3 trillion. She has trained over 50,000 professionals on privacy, AI and cyber including Fortune 100 CEOs and board members, and, at the request of the California Supreme Court Chief Justice, trained all California judges on data privacy.
