Legal Podcast Example: ROBOT LAW!!!

Cary Silverman, a partner at Shook, Hardy & Bacon L.L.P., discusses how robots at work, at home, and in the community can change the way we think about liability when problems occur in what has become everyday life... with robots.

This episode serves as an example of how law firms can approach producing their own podcast and was recorded with both Charles and Cary in our studio.

Podcast Transcript: Legal Podcast Example: ROBOT LAW!!!

Charles: Hello and welcome to Open to Influence. I'm your host, Charles Lipper, founder and CEO of Volubility Podcasting in downtown Washington, DC. Today we are joined by Cary Silverman, who is a partner at the legal practice of Shook, Hardy and Bacon, and serves as an adjunct law professor at George Washington University Law School. Cary assists clients with a wide range of complex legislative and regulatory matters. He has testified before Congress and most state legislatures (very impressive). His work focuses on product liability, tort and consumer law, and civil justice reform. Cary has published over 30 law review articles. He closely studies trends in civil litigation. Last year, he coauthored a report for the US Chamber Institute for Legal Reform entitled "Torts of the Future", which examined the litigation and regulatory environment for emerging technologies such as autonomous vehicles, drones, private space exploration, the sharing economy, and the internet of things. He's currently working on a sequel and is here to talk with us about one area this new paper will address, robotics and artificial intelligence. Cary, welcome to the show.

Cary: Thanks, Charles.

Charles: So Cary, you and I met shooting pool in a pool league when I moved to town circa 2003. Where were you in your law career at that point?

Cary: I was a third year associate at Shook, Hardy and Bacon at that point.

Charles: Okay. So you've been there a good many years now. It's now 2018.

Cary: Yeah. I'd just joined the public policy group at that point. So my practice was shifting from focusing primarily on regulation and working on litigation to looking more at the public policy aspects of the law, how it impacts society.

Charles: Okay. And so it looks like you're serving in a great many practices at Shook, Hardy and Bacon. We have public policy, appellate, product liability, mass tort, CPSC, product recall, national amicus... Is your day-to-day with all of these practices within Shook, Hardy and Bacon?

Cary: Oh, you never know. Every day is something new and fun. But one of the things I definitely enjoy most is following litigation trends: what the plaintiff's bar is doing, how the courts are responding, how state legislatures and regulatory bodies are addressing changes in technology, changes in the way we live and work. So that's really fun to watch, looking for abuses that are occurring, for problems that are arising for businesses, and for how those can be addressed.

Charles: Cool, cool, cool. So most recently you came to me asking if I would produce a mock ad for a presentation you were making, and we'll take a listen to that now.

Mock Ad: Attention. This is a consumer health alert. If you or a loved one was hurt by a robot or other product with so-called artificial intelligence, call The Tiger Group. Robots are commonly found in the home, in the workplace, providing healthcare, in stores, and on the street. Government agency reports have found that robots have resulted in serious physical injuries, harm during medical care, damage to property, death, and a risk of extinction of the human race. If you or a loved one has been harmed by a robot, whether as a result of malfunction, its programming, or its own actions, you may be entitled to substantial compensation. Do not delay. Call The Tiger Group right now. There is no risk. You don't owe us a penny unless we are successful. If you or a loved one was hurt by a robot or artificial intelligence, call The Tiger Group's robot hurt-line right now at 1-555-ROB-HURT. That's 1-555-ROB-HURT. Paid advertisement, non-attorney spokesman. Not licensed to practice law in any state. The Tiger Group is an advertising group that identifies and refers or sells potential claims to law firms. It is not a law firm. No representation is made about the quality of legal services performed by other lawyers. In some states legal services may be provided or assisted by robots. Consult your physician before deactivating any robot providing medical or personal care services.

Charles: All right, so you came to me with this project and it was titled "Robot Law!!!", and I thought it was intriguing and would make for a good podcast episode. So tell me, Cary, what is... what DC area voiceover talent Craig Klein would refer to as...

Craig Klein: ROBOT LAW?

Cary: Well, robot law in a way is not completely new. I mean, we've had automation and robots working in auto factories, moving goods from inventory in all kinds of industries for a very long time. So what's changing, I think, are a couple of things, which give rise to robot law and the implications of artificial intelligence. The difference now is that in the past, robots were kind of confined to cages. They were set apart. They actually do call them cages.

Charles: We're talking about giant racks of machines in Silicon Valley, or what are we talking about?

Cary: No, we're talking about, imagine an auto parts factory or an automobile assembly line, where robots are basically cordoned off from people. They're kept away from them, they're surrounded by something. They have their own area. People go in there, and they make sure they shut them down if they need to troubleshoot them, fix them, repair them. And they're kept away from us. So one thing that's changing is robots are coming out of their cages now. They're free.

Charles: The separation, it's a safety issue though, isn't it? Wasn't that always the purpose? Like, A) to keep them dust-free so they can operate, but also to make sure no one gets trapped in robot arms.

Cary: Right. And that's traditionally been the purpose. Here's a piece of trivia for you. How many people do you think have been killed by robots over the past 30 years?

Charles: Thirty years.

Cary: Say 30 years, yeah.

Charles: How many people have been killed? We're not talking like Futurama with a robot literally strangling you?

Cary: Uh, well sometimes it's been kind of like that.

Charles: Well sure. I mean, if we're going to talk about the meat packing industry...

Cary: Throw somebody in a grinder.

Charles: It's probably in the thousands, right? Yeah, it's probably a high likelihood in some industries, which 30 years ago were probably a lot less safe than they are now.

Cary: True, true. And safety mechanisms have definitely helped, but it's actually fairly low. I mean it's about 30 people over 30 years.

Charles: 30 people in 30 years.

Cary: Reported to OSHA, at least. That may be under-reporting it, but that's the number of OSHA complaints that have been filed over 30 years.

Charles: Okay. So 30 people over 30 years. Why are we sitting here talking about...

Craig Klein: ROBOT LAW?

Cary: Well, because they're loose. Now they're coming out.

Charles: Oh, okay. So we took away the safety glass...

Cary: They're in your house now.

Charles: Okay. So that's it. Okay. So when you said they're out, I thought you meant in the factories, "Okay, we're taking down all of our safety precautions" in a factory.

Cary: Well, that is changing too. You can find robots in your local Home Depot... uh, Lowe's. Lowe's, actually, in some parts of the country, where they'll help you find things in the store. I mean, so we're not talking...

Charles: Will they cut your wood for you, though? I really want somebody to cut my wood for me at the store. I hate doing that myself.

Cary: I don't know if they do that yet, but I imagine they certainly could do that soon.

Charles: Sure.

Cary: But the first robot death was a guy in an auto factory. It was a five-story robot that got stuck, and the guy had to go up to retrieve some inventory and it turned on, whacked him in the head. In the future, we're not talking so much about those kinds of robots anymore. They're going to be smaller. They're not going to be cut off from people, but they'll be out there. And the other difference, besides the size and where they're going to be, is that robots are not necessarily going to be like another piece of equipment, another machine. They're going to be able to make their own choices as artificial intelligence is brought into them and they become increasingly able to learn and to make decisions. So, I'm not saying that we're going to have a flood of lawsuits from robots, but there certainly will be more as people interact with them in stores and in homes and in other places. And as for the types of things that are going to arise in the law... they may not be seen as just a piece of equipment anymore, but something different.

Charles: A coffee machine that rules my life. Right.

Cary: Right.

Charles: So it seems like the focus of your work in this presentation was liability, right? So tragedy happens due to a robot, who is liable for that? Is that kinda the crux of what you're doing now?

Cary: Yeah, that's what we're looking at. And there's different ways of going about it. Now, like I said, the early lawsuits, the lawsuits up to date, have been traditional. Someone gets hurt in a factory, it's a traditional workers' compensation claim against the employer, and then possibly a product liability lawsuit against the maker of the robot, but it's the same kind of product liability claims that you would see with most other workplace equipment. So it's not a whole lot different. Failure to incorporate adequate safety mechanisms, as you were mentioning before, is one possible type of defect that could be alleged, or a failure to warn of a hazard. So, those haven't changed much. You also see robots not only in factories; there are some robots used in healthcare, in robotic surgery.

Charles: Sure.

Cary: And to date, those have also been more like traditional medical equipment than robots. They're not autonomous. They are closely controlled by the surgeons when they're doing a laparoscopic surgery. So you have lawsuits arising out of those types of procedures, just like any other medical procedure, where you get your standard types of claims.

Charles: Well now, you mentioned something critical there I want to focus on. So when you said they're in medical... I'm thinking the doctor is checking my blood pressure with this machine or...

Craig Klein: ROBOT

Charles: The surgery part... when I imagine doctors, maybe it's just because I haven't had many surgeries in my life, but external robots are what I'm imagining. But surgery, when someone's got you cut open and a robot is assisting in that practice, that's scarier in my mind. So, do you want to elaborate on that at all?

Cary: Sure. Well, I mean, right now there's the equipment that's out there, and then there are the safeguards that any developments in that area are going to have to get through. I mean, the FDA is going to have to approve clinical tests for it, and it's going to have to be approved for actual use. So there are those kinds of safeguards built in, and there's been a limited number of robotics approved for use, but when they are, they have been closely controlled by the surgeon thus far, completely controlled. They're not doing things themselves. But there are already a number of technologies that have been developed, or are being tested to some degree, maybe not on humans yet, that can do things like repair your skin and some pretty complex surgeries, and possibly do them better than a doctor could, because they have a more stable hand.

Charles: Well yeah, they can get in there and see. They're not looking...

Cary: They can get in there and it's less invasive, and it's also very precise in the kinds of maneuvers they can do. Even the surgeon with the steadiest hand is going to be hard to compare with what a robot could do. But are they going to be able to anticipate when complications arise, and make the right changes that they need to, in the way that a real surgeon would?

Charles: It depends on how many electrodes are hooked up to the patient. They're going to cover you with a full body scan.

Cary: I think from a liability perspective, with any kind of emerging technology, what we're concerned about is that a court could see it as something completely new and different and sort of throw traditional principles of law out the window, and expand the law and expand liability in a way that can hurt the ability of the technology to take off and actually be used. And we've already seen this, actually, last year in one area, the surgery area.

So, there's a principle of law for pharmaceuticals and for medical devices. It's called the Learned Intermediary Doctrine. Everyone understands that a company that makes a drug or a medical device has an obligation, a duty, to warn of the risks of that device, right? What the Learned Intermediary Doctrine does is tell you who that obligation runs to. It says that the company that makes the medical device or the drug has a duty to warn the doctor, because only the doctor really knows what this drug or device is going to be used for with an individual patient, who has their own situation, their own medical health condition. He or she knows that and knows the best way of conveying those warnings, right? So the drug company or the medical device company typically does not have an obligation to directly warn a patient. They have no direct communication with that patient to begin with. But even if they did, they don't have the information to warn the patient. So the duty runs to the doctor.

Last year, the Washington Supreme Court, the state of Washington, ruled in a case involving Intuitive Surgical's da Vinci Surgical System, and what the court ruled there is that even though the company had conveyed the risks of complications to the doctor, they were not going to apply the Learned Intermediary Doctrine, because they found that the company had a separate duty, a separate independent requirement, to give information to the hospital that employed that doctor. That's something new we haven't seen before in medical device litigation. And one wonders whether, because this was a robotics technology and kind of viewed as scary, the court imposed some new liability.

Charles: Well, all right, so right now a doctor can use whatever equipment he wants without anyone warning the hospital of its precautionary measures?

Cary: The doctor would be operating out of that hospital and would be given permission by the hospital, have privileges in that hospital to do surgeries, but beyond that...

Charles: But they're not monitoring his equipment? I'm assuming...

Cary: I don't know. They would know. They would obviously know what equipment is being used.

Charles: Okay.

Cary: But the manufacturer of that equipment wouldn't have some duty to come and meet with the hospital, and warn the hospital about what this equipment is used for and what the risks of it are. You know, that would go to the doctor. That would be the doctor's job to figure out how am I going to use this equipment safely.

Charles: Right. We're going to give you a scalpel. We're assuming you're going to use it for surgical practices. I mean, that's obviously a dumbed-down version, but is that basically what we're saying? The manufacturer of said scalpel doesn't need to explain what the scalpel is used for; we assume you as a hospital know that?

Cary: Yes, but it's more that the appropriate person, the person who is using the equipment, that's the person you need to give the warning and the information to, because he's the one who interacts with the patient directly and has the obligation under the law to convey that information to his or her patients.

Charles: Gotcha.

Cary: The hospitals, that's not the hospital's obligation.

Charles: Got it. Okay, so the lawsuits that have arisen, are these the result of malpractice suits, where doctors are being sued because someone died on their operating table, and now they in turn are pointing the finger at this malfunctioning...

Craig Klein: ROBOT?

Cary: When you have a bad outcome, a lawsuit is very likely to follow, and what we've seen with respect to robotics in hospitals and this particular piece of equipment is that they're pretty standard claims. They involve medical malpractice claims against the surgeon who did it. The hospital may be pulled into that as well, and then you'll see a product liability claim against the manufacturer of the equipment, so it's both.

Charles: Right. And I imagine, from a patient perspective, they're not going to find out what equipment was used until the outcome of a malpractice suit, right? Or if I have a loved one who dies on an operating table, am I going to ask, well, what equipment did you use?

Cary: Well, with respect to this kind of equipment they should know beforehand, so they know that this is what was being used.

Charles: Okay. We're going to use this new technology...

Cary: Right. And you should have been informed about the type of surgery and the risks involved.

Charles: Got it. Cool. All right, so the ad you had me do here... I guess the nature of the ad kind of reminds me of a Better Call Saul kind of thing, but is there class action potential for these kinds of things, or are these all one-on-one lawsuits that we're seeing?

Cary: So far, these are all one-on-one kinds of lawsuits that we're seeing. I think there's the potential for more, and that's what this ad we put together was based on. I had just come off doing another paper, which you can find online, that examines the public health implications of these late-night or daytime TV ads you see saying this drug is going to kill you, have you been injured, and they have the impact of really scaring people out of taking their medications or even seeking treatment. So that made me think about this topic coming into it, because if you do a Google or YouTube search for 'robot lawsuit', you'll actually find ads that are not that far off from what you and I put together, and this is just for what's out there now. Some of these medical liability lawsuits, or also workplace injury lawsuits, you can find them. They're very close to what we did, but they'll get worse. Ours was a little over the top, but that's exactly what you see in the pharmaceutical context. And I'm reminded, just last week Stephen Hawking passed away, and one of the things he's very well known for is warning about artificial intelligence. I'll read you one quote he had. He's a quotable guy, right? "We cannot know if we will be infinitely helped by artificial intelligence, or ignored by it and sidelined, or conceivably destroyed by it."

Charles: Okay.

Cary: So, maybe over the top. Maybe that is something that is going to happen. Elon Musk has issued similar warnings, telling the government that AI (artificial intelligence) is a fundamental risk to human civilization.

Charles: Sure.

Cary: We need to act. So before we get to that stage, before we get there, we can be assured that there will be some injuries, and when that happens, we will start to see the lawsuits. And I think it is possible that we'll see plaintiff's lawyers jump on that and put out things targeting "evil robots" and ask people, if they have a claim, to call. And what you'll see is maybe not class action lawsuits, but what they call mass tort claims: personal injury claims against the device makers, gathered over the TV and the radio, and then sort of transferred, sold, and traded among members of the plaintiff's bar, packaged into large lawsuits brought in the future.

Charles: To me, it hearkens back to the Matthew Broderick film WarGames, when the artificial intelligence said, "oh, well, clearly the humans are going to destroy themselves," so it initiates global thermonuclear war. So we've spoken extensively about how robots impact medical practices and the medical industry. What other industries are seeing potential lawsuits?

Cary: Well, I think what's going to be interesting, as the technology evolves, is how liability law treats the types of claims that are going to arise, whether it's a person who's injured in a store or at home by a robot. You could imagine, there are things being developed called, basically, CareBots or cobots. They're apparently already big in Asia. Maybe you have a robot in your house, and you're getting older, and it's helping you out of bed and into a wheelchair. Oops, it dropped you.

Charles: Sure.

Cary: Things like that may happen sooner rather than later. So questions are going to arise, especially when robots have the ability to learn and make their own choices, as to who is liable in those types of situations, regardless of what particular industry or context it happens in. I think if you're a plaintiff's lawyer, you're obviously going to look to everyone who's involved for potential liability, but there are different challenges, I think, that are going to arise in each area. The issue is going to be who really controls the robot. There are going to be cases where... there are certainly people concerned about hacking. People can...

Charles: Hacking computers, right? Not hacking people.

Cary: No, no. Hopefully we don't have robots that hack us. Although you could have robots hacked to hack people. That is a possible concern.

Charles: Is that like peer to peer, like what, hack to hack?

Cary: Hack to hack, hack to hack the people. You know, you don't want that.

Charles: What?

Cary: So let's say you've got, I don't want to get into sex bots, but there's a lot of talk on the internet about sex bots.

Charles: Hey, their lawsuits are just as viable as anybody else's lawsuits.

Cary: There's people that anticipate, there's gonna be people... You could buy sex bots and have them, you know, do what you want with you...

Charles: And yeah, well, physical injury could be a problem.

Cary: What if one of those gets hacked into by someone?

Charles: Oh.

Cary: And controlled by someone.

Charles: Oh...

Cary: And it could happen in any context. Any connected product.

Charles: Sure, yeah.

Cary: Do we, I mean, there are very big concerns about hacking and privacy.

Charles: Sure. So now, sorry to go back to the medical, but yeah, what if someone's going to hack a robot while they've got your chest open...

Cary: We're not talking too much here about autonomous vehicles, but they're obviously a subset of these robots. We've addressed that in a separate paper, and I've tried to keep our discussion here focused more on the, you know, bee-boop types of robots...

Charles: R2-D2?

Cary: ...than the honk-honk robots.

Charles: Gotcha.

Cary: But a lot of the principles we're talking about apply to autonomous vehicles too, and those concerns, the hacking concerns, have already come up in litigation there: that someone could get control of your autonomous vehicle and shoot you off the road or lock up your brakes. The same thing could happen with any other type of autonomous connected property. So those types of concerns are going to arise. But from a more practical standpoint, there are limits on the liability of product manufacturers for criminal acts of third parties. And there are limits on when you can bring lawsuits over speculative harms: if it's just the potential that somebody maybe could hack into a product, that there's a vulnerability, but nothing's actually happened. Or maybe there's evidence that it could happen, or it did happen, but no one got hurt. So there are limits on the ability to bring those types of lawsuits, and they kind of pose an obstacle to those data privacy and potential injury types of concerns. But outside of that, as robots gain the ability to make choices and take actions that you don't really control, there are practical implications. I mean, getting off the bee-boop robots, but let's say you have an autonomous refrigerator.

Charles: Sure.

Cary: And it's tracking what you have in your fridge, and via Alexa or something it's ordering food for you and other things. Are you on the hook for that? What if it goes and decides it's gonna order you all the crazy organic, overpriced stuff because it thinks it's healthy for you, and you get a really big bill? Are you bound to that decision? Or you could imagine you've got some sort of robot nanny, and it's susceptible to manipulation by your children, who say, I really want to buy... you should really get me that extremely expensive automatic Jeep that I could ride in the front yard for $600. And it goes ahead and buys that. Are you liable for that? So...

Charles: I imagine we're already... I think there's some kind of security measure with Alexa, where it knows your voice and won't allow your child to order on Amazon for you, right? I think, right?

Cary: I hope so.

Charles: I think so. I think that's up to the parent, whether you let your kid order on Amazon.

Cary: So you know, you never know. This new autonomous refrigerator and Alexa could come to some agreement and they decide they're going to come after you for this.

Charles: We need to get our economist in here and see if Amazon's picking the one product you're allowed to order.

Cary: So you've got those practical situations, and there are areas of the law that exist that you could look to for how to handle that situation. You've got agency law, right? Where you could hire someone and give them apparent or actual authority to do things on your behalf. So if you have an autonomous robot and it goes out to the store, drives in its autonomous vehicle and goes into the supermarket, or it goes into the jewelry store and buys something, and it seems to have your consent to do that, it seems to be working for you, and a sale is made, principles of agency might bind you to that decision. There are also other areas where you don't have complete control over people, but you're still liable, like employment law. I mean, it is sort of a subset of agency law: respondeat superior, where your employee goes out and they're making deliveries and they get into an auto accident. Even if it's in their car, if they were out doing it in your service, something that benefited you, even if they were completely negligent, grossly negligent, you're still going to be on the hook for that.

Charles: Right, you're liable for employing that person.

Cary: Yeah. So very similar principles could apply to an autonomous robot or car or something like that.

Charles: And we have robots delivering stuff around DC right now. You see them rolling around. Have any lawsuits come up from those yet?

Cary: There haven't been. Haven't been... yet. There was the first lawsuit recently against an autonomous vehicle manufacturer in California. It was a motorcycle accident with an autonomous car that was being tested. It had a driver in there, but it was 'no hands on the wheel', just completely autonomous.

Charles: Yeah and we have to assume at that point the vehicle manufacturer is liable because they've informed the consumer, "hey, it's safe for you to take your hands off the wheel".

Cary: Right. So the driver didn't sue. The driver was fine. But the motorcyclist who had an impact with the vehicle did bring a lawsuit, and they brought the lawsuit against the manufacturer.

Charles: Right.

Cary: Didn't name the driver.

Charles: That makes sense.

Cary: At all. It's just the manufacturer. But what's interesting about it, and maybe we'll see this in other contexts, is that it was only a 4-page complaint. I've seen 150-page complaints charging that Trix cereal has too much sugar in it. But for the first lawsuit against an autonomous vehicle? Four pages. And all it said was the vehicle didn't act like a reasonable person should've. That's all it said. It wasn't even a product liability claim. It wasn't a complex "let's look at the design and the programming". It was just: a reasonable person would have seen the motorcycle.

Charles: Is it Cadillac that right now has an ad saying, okay, we have a hands-free freeway feature in your car? And it has a video of this gentleman sitting in his car, and he takes his hands off the wheel on the freeway and leans back and smiles. Should we trust these things?

Cary: So there are a couple of manufacturers that have an autonomous mode or something like that, but I think they all, at this stage at least, instruct that you have to be sort of ready. There's no nap-taking. We still have steering wheels in the vehicles.

Charles: Well, if that's the case, who is liable? Is it the manufacturer, or the person for not babysitting their robot?

Cary: Well, we shall see.

Charles: We shall see. All right.

Cary: There hasn't been a whole lot of litigation there yet. One other area you can look to for situations where liability arises in some cases, where you don't really have complete control over whatever caused the injury, is pet law. Your Fido, right? So you've got a cute little puppy, and it's always been nice and good to kids. I was in the park, and this little dog was jumping on our kids and they were having fun. Well, one of them was having fun. The other was scared. But if the dog has a history of being just fine... You know, a lot of states have developed what they call the 'one bite rule': until someone basically has notice that a dog has a vicious propensity, some reason to think that it's going to hurt somebody, you're not liable. You have to have some level of knowledge, and then you're on the hook.

Charles: But until that day, I am not letting anyone's dog sit next to me on an airplane. That's where I draw the line.

Cary: Right. So the same kind of principle could apply with robots. And some states have adopted dangerous dog laws, where, you know, certain breeds are considered dangerous. In the robotics area, you may have certain types of security robots, or larger robots, or different kinds of technology that have more of a potential to cause an injury. Different levels of autonomy, too.

Charles: So I should keep my electric toothbrush to myself is what you're saying.

Cary: Your electric toothbrush, maybe you know, even if it knows your teeth are too dirty and has a little bit of learning, it's probably not going to stab you in the forehead, right?

Charles: It's probably not going to bite. But it might. There is a one-bite rule.

Cary: Yeah.

Charles: And on that, I think we got a lot on this one. Thank you, Cary, for joining us. To learn more about Shook, Hardy and Bacon, please visit SHB.com. If you liked what you heard today and think Cary Silverman or Shook, Hardy and Bacon should have their own podcast, please let us know at Facebook.com/volubilitypodcasting, on Twitter @VolubilityPod, or you can email me directly at charles@opentoinfluence.com. If you enjoy Open to Influence and would like us to create a similar podcast for your organization, you can email us at info@volubilitypodcasting.com. Thanks for listening to...

Craig Klein: ROBOT LAW!!!

Charles: And extra special thanks to DC area voiceover talent, Craig Klein.

Links from this podcast:

https://www.shb.com/

http://www.instituteforlegalreform.com/research/ilr-research-torts-of-the-future-ii

https://volubilitypodcasting.com/

Twitter: @VolubilityPod

https://www.facebook.com/volubilitypodcasting

info@volubilitypodcasting.com

charles@opentoinfluence.com

Charles Lipper

Charles Lipper, Founder & CEO of Volubility Podcasting, has been working as a post production audio engineer since 2000 and a voiceover talent since 2005. His love of meeting fascinating interview subjects and crafting compelling stories through audio led him to open Volubility Podcasting in 2017.
