
Today we’re talking about the messy, fast-moving situation at Anthropic, the maker of Claude that now finds itself in a very ugly legal battle with the Pentagon.
The back-and-forth is complicated, but as of a few days ago, the Pentagon had deemed Anthropic a supply chain risk, and Anthropic has filed a lawsuit challenging that designation, saying the government has violated its First and Fifth Amendment rights by “seeking to destroy the economic value created by one of the world’s fastest-growing private companies.” I can tell you right now: We’re going to be talking about the twists and turns of that case on The Verge and here on Decoder in the months to come.
But today I wanted to take a moment and really dig in here on one very important element of this situation that’s not gotten enough attention as this has spiraled out of control: how the United States government does surveillance, the legal authority that allows that surveillance to occur, and why Anthropic was distrustful of the government saying it would follow the law when it comes to using AI to do even more surveillance.

Verge subscribers, don’t forget you get exclusive access to ad-free Decoder wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.
My guest today is Mike Masnick, the founder and CEO of Techdirt, the excellent and long-running tech policy website. Mike has been writing about government overreach, privacy in the digital age, and other related topics for decades now. He’s an expert on how the internet and the surveillance state have grown up in interconnected ways.
You see, there’s what the law says the government can do when it comes to surveilling us, and then what the government wants to do. And most importantly, there’s what the government says the law says it can do, which is often exactly the opposite of what any normal person simply reading the law would think.
You’ll hear Mike explain in great detail here in this episode that we cannot — and should not — take the US government at its word when it comes to surveillance. There’s just too much history of government lawyers twisting the interpretations of simple words like “target” to expand surveillance in complicated ways — ways that usually only cause concern in legal circles, and only bubble up when there are huge controversies like whistleblower Ed Snowden’s major NSA revelations more than a decade ago.
But there’s nothing subtle or sophisticated about policymaking in the Trump era — and so with Anthropic, we’re having a very loud, very public debate about technology and surveillance in real time, on the internet, in blog posts and X rants, and over press conference soundbites. There are positives and negatives to that, but to make sense of it all, you really have to know the history.
That’s what Mike and I set out to explain in this episode — whatever your views on AI and government, this episode will make it clear that both parties have let the surveillance state get bigger and bigger over time. Now, we’re on the cusp of the biggest expansion yet when it comes to AI.
Okay: Techdirt founder and CEO Mike Masnick on Anthropic, the Pentagon, and AI surveillance. Here we go.
This interview has been lightly edited for length and clarity.
Mike Masnick, you’re the founder and CEO of Techdirt. Welcome to Decoder.
I’m glad to be here.
I’m excited to have you on. I was just saying I am shocked that you’ve never been on the show before. You and I have been writing and posting around each other for a long time. A lot of The Verge’s policy coverage owes a debt to what you’ve done at Techdirt, and what’s going on with Anthropic is so complicated, but it hits so many themes that you have covered for so long. I’m glad you’re finally here.
It is a complicated mess of a topic, but I’m excited to be digging in on it.
What I want to focus on with you is not the details of whether Anthropic is going to sign a contract with the government or whether OpenAI is going to get that contract. Instead, I’m confident between the time we record this and the time people listen to it, there will have been more tweets and more things will be different than they were before.
What I want to focus on is just one of the two red lines that Anthropic has really laid out. One of them is autonomous weapons, which is its own level of complication. The law there is a little bit more nascent, as is the question of whether those weapons even exist or have already been deployed by Russia in the Ukraine War.
There are a lot of ideas there, but I just want to set that aside because I think that is going to come into more focus all on its own schedule. The other red line that I do want to spend a lot of time on is mass surveillance. And there’s quite a lot of law here about mass surveillance. There’s a lot of history, a lot of controversial history. The entire character of Edward Snowden exists because of controversies around mass surveillance.
It all comes down to—I think you are the one who posted this—the National Security Agency (NSA), which is part of the Department of Defense, which we have to call the Department of War now for some reason.
[Laughs] We don’t have to do anything.
[Laughs] We don’t. That’s true here in America. We don’t have to do anything. But the NSA has basically redefined a lot of words away from their colloquial English meanings so that they mean, “We can just do surveillance.” And then every so often there’s a scandal when people discover that they’re just doing surveillance. So just set the stage there. I don’t want to rewind you all the way back, but it’s been quite a long time that this pattern has repeated itself.
It depends on how deep you want to go, but the short version is that, obviously, in the post-9/11 world, the US passed the Patriot Act, which gave the government some ability to engage in surveillance that was supposed to be for protecting us against future terrorist threats. Over time, that got interpreted in interesting ways, and there were some limits on it. We also had the FISA court, which is a special court that is supposed to review the intelligence community and its activities but has traditionally been a one-sided court. Only one side gets to plead its case to that court, and it’s all done in secret.
There’s a lot of stuff that was not known. And then there was one other piece in all of this, which goes all the way back to Ronald Reagan, which is Executive Order 12333, which is supposedly about setting out the rules of the road for intelligence collection.
So you have these few sets of laws and an executive order that, to the public, at least the parts that you can read, seem to say certain things about what our government, and the NSA in particular, can do in terms of surveillance. Read with a plain English dictionary, the kind you and I have and understand, they would leave you with the belief that the NSA’s ability to surveil Americans was very limited, to the point that if the agency realizes it is surveilling a US person, it’s supposed to immediately stop, cry foul, erase the data, and all of this other stuff.
There were rumors for a while that that was not really happening and there were hints and in particular Senator Ron Wyden was very vocal about going on the floor of the Senate and saying, “Something is not right here and I can’t quite tell you what,” or in hearings he would ask intelligence officials, “Are you or are you not collecting mass data on Americans?”
Those officials would either deflect or in some cases outright lie. There was a hearing in 2013 with James Clapper, who was the Director of National Intelligence at the time, where he was asked directly on this point. And he basically said, “No, we don’t collect data on Americans.” That was a big part of what inspired Ed Snowden to leak the documents and reports that he did to Glenn Greenwald, Barton Gellman, and Laura Poitras. From all of that, what we began to discover was that the NSA has its own dictionary that is somewhat different from the dictionary that you and I use, such that it can interpret words in ways that differ from their plain English meaning, including words like “target,” which feels like a key word. The broad understanding is that, in theory, they’re only supposed to target people who are not US persons, I think is the phrase.
But the way it had been interpreted over time was that anything that mentions a targeted foreign person, anything that is about a foreign person, is now fair game, even if it is the communications of a US person. So if you and I were to text each other and mention a foreign person, that is now fair game for the NSA to collect and to keep and to store.
There’s a second part of this. I mentioned first Executive Order 12333 from Ronald Reagan, which, as the technology changed over time and the internet grew, effectively allowed the NSA to tap into foreign communications, but that included any communications that may have left the US en route somewhere. So if I’m texting you and a message goes from me in California through a fiber optic cable that happens to leave the US, the NSA could put a tap on that cable once it’s outside the US and collect that information, even if the message was just going to you within the US.
The NSA could then keep that information even if it was on US persons, and they could do specific searches on that later, sometimes referred to as “backdoor searches.” They collected this information that we believe they weren’t supposed to collect in the first place, but they could keep it. And they promised, they pinky swore, that they would keep it private, but if they did a search and found that you or I mentioned a foreign person, then suddenly it was fair game for them to do whatever they want with it.
In total, that has turned into a world in which the federal government can basically collect any information that happens to touch outside the US. Even if it is entirely between two US persons, if they mention or even hint at someone who is not a US person, suddenly it is fair game to be collected. And from that we’ve gotten what appears to be a form of mass surveillance of US persons by an NSA that claims and publicly states that it does not spy on US persons.
How did we get to this point? This is a lot of incremental baby steps. You mentioned James Clapper in 2013; that’s the Obama administration. You mentioned Ronald Reagan; that’s the 1980s. We’re going through Democrats and Republicans here.
The war on terror happened in the George W. Bush administration, and 9/11 and the Patriot Act happened in the George W. Bush administration. There are a lot of incremental bad things under presidents of both parties, under congresses of both parties. How did this happen?
The simplest form of it is just that nobody, and certainly no president, wants to be president during the time when there’s a big terrorist attack, because that makes them look bad. Obviously they also want to protect Americans, right? That’s part of their job. And you have an intelligence community that is basically operating in darkness, because that’s what intelligence communities do, and it keeps coming to you and saying, “Hey, if we could just get access to this information, it’d be really helpful in preventing a terrorist attack.”
There may be cases where that’s true, where the intelligence community is able to use this information in a way that works well. But we are also, in theory, a society of laws with a US Constitution that we’re supposed to obey. What happened instead is that administration after administration, again, Republican and Democrat, had lawyers who were very clever and who would look through and say, “Well, if we sort of position it this way, or state it this way, or interpret it that way, we can get what we want and not technically break the law or not technically violate the Fourth Amendment.”
The assumption was always, “We can sort of bend the law or bend our interpretation of the law and nobody’s really ever going to see this, or nobody who cares is really ever going to see this, and therefore we’ll get away with it.”
There are two things that really jump out at me. One, you and I both read a lot of court decisions — appellate court decisions and Supreme Court decisions. And there’s a fight in our Supreme Court about how to literally interpret the words in our statutes and our laws.
I won’t get too far into it, but I would say generally the idea that you should just read the words on the page and do what they say is the dominant strain of statutory interpretation in the United States. Left or right, they both say it. They argue about some very esoteric fine points of what that actually means. But that you should just be able to read these words and do what they say, that’s not up for grabs, right?
We’ve landed on at least that first pass of what you might call textualism. How do the lawyers in administrations of both parties get this far away from the dominant mode of legal decision-making in our country? Justices appointed by both parties agree that that is at least the first step.
I wish I knew the exact answer, but I think it is motivated reasoning, right? As a lawyer, you are there to defend your client and the success — if you can call it success — of our legal system tends to be based on having an adversarial situation where you have different sides arguing over these things, where the role of the adjudicator is to narrow in and figure out which side is actually correct.
One of the problems with the intelligence community and the setup of it is that you don’t have that adversarial situation. That makes it easier for one side to justify the argument that they’re making because nobody is really pushing back on it. You combine that with the overarching fear of another terrorist attack, anything related to national security, and even when you have situations where you have the FISA court — I mean the FISA court was somewhat famous for effectively being a rubber stamp for many years.
I forget the exact numbers, but it was something like over 99 percent of applications that went to the FISA court to allow for surveillance of certain situations were granted, and it’s easy to say 99 percent is obviously too much. Obviously those bringing claims to the court, they’re picking and choosing. They’re not, for the most part, bringing totally crazy claims. But without that adversarial aspect and with a very strongly motivated group of people who think, “We need to do this,” or are being told by an administration, “We need to do this,” they’ll find ways to do it. And that’s where you end up over time.
Has there been anyone involved in this process who’s ever woken up and said to themselves, “Boy, we’ve managed to redefine the word ‘target’ to mean anything we want”?
[Laughs] Obviously you had Ed Snowden, who leaked a bunch of documents. You had John Napier Tye, who wrote a piece for The Washington Post in 2014, which revealed the interpretation of Executive Order 12333, and said that that’s the real issue to pay attention to. You have other people who have spoken up about these things, but for the most part, the people who are involved in working within the administration on intelligence community stuff are bought into the view of the intelligence community, which is that the overriding goal is to protect the country from something bad. The best way to do that is to have as much information as possible.
It’s easy to be sympathetic to the argument that, yes, having more information may allow them to catch something earlier or find something important, but, one, that might not be true. Getting too much information is probably just as bad as too little information because it can often hide the information that is actually useful, the information that you actually need to determine something.
But also, we have a US Constitution in the first place and we have reasons why, in theory, we’re not supposed to allow for mass surveillance without probable cause. As a country that believes in the rule of law, we should be able to live up to that, and when all this stuff happens in darkness, you will tend to lose sight of that.
This brings me to Anthropic. Anthropic is primarily an enterprise company. They’re good at working with the government, they’ve built those muscles, and they’re staffed by people who are really well versed in some of this stuff. They obviously looked at Pete Hegseth saying, “We want all lawful uses,” and they went two levels of interpretation down and said, “Well, you literally believe that these words do not mean what they say on their face. So ‘all lawful uses’ is too big, and we want to put some guardrails, particularly around mass surveillance.”
Again, I’m going to bracket out autonomous weapons, which was the other red line, but particularly on mass surveillance, Dario Amodei is out there saying, “We can do too much. This is too dangerous. This is a Fourth Amendment violation.”
The tension there is “you’re saying you’re going to comply with these laws that say one thing and now, after all this time, they mean something completely different and we just don’t want to be part of that.” That’s the fight. I just want to compare that to Sam Altman, who swoops in to say, “We’ll do all lawful uses,” and then posts this long message being like, “Here are all the laws we are going to comply with.”
It seems like Altman didn’t know how the NSA had reinterpreted these things and kind of got taken for a ride. He’s since been walking it back slowly — even as we are recording, I’m confident there are more tweets and everyone’s positions have changed — but it does seem like OpenAI got roped into reading the statutes on their face and believing what they said. Is that your interpretation of events as well?
There are two possibilities, and that’s one of them. One is that he got played the same way that the public got played for many years. The alternative theory, and I have no idea which one of these is true, is that he or some of the lawyers at OpenAI — who I think are very competent and very knowledgeable — knew this, but thought that they could play the same game that the NSA played for a few decades: as long as they say the words but don’t reveal the actual interpretations, they can get away with it too. So Sam comes out with the statement that makes it look like, “We had the exact same red lines as Anthropic did, and the government was great with that.”
In fact, I think Sam Altman said that Anthropic had two red lines and OpenAI had three, and the government was perfectly fine with it, and that left a lot of people sort of scratching their heads. But I think it has to be either that Sam Altman and whoever was surrounding him didn’t understand how these things work in practice, or they did, and they just assumed that the public wouldn’t know and therefore they could get away with it.
The other thing that comes to mind — again, AI is new and it’s so tempting to come at new technologies as though these are problems of first impression. “No one’s ever had to think about this before.” But the reality is everyone’s been thinking about this stuff for a long time. Maybe the thing that’s new here is not AI, but that the second Trump administration, instead of doing a bunch of lawyering that maybe no one will ever read to justify its actions to a secret court that no one’s paying attention to, is just not that subtle.
They’re not that sophisticated and they’re just saying they’re going to spy on everybody all the time. They just announced their intentions in a way that maybe all administrations should just announce their intentions and see where the chips fall.
But I’m looking at the fact that there was Ed Snowden. Here in New York City, AT&T runs a building that everyone knows is an NSA building. It is just a giant building, and we’re supposed to pretend it’s not an NSA surveillance center, but it’s right there. It’s huge. None of that seems to have come to anything. All of these revelations, these leaks — we haven’t backed any of it off.
In fact, it’s only increased as so much of our lives has gotten more and more digital. And maybe the Trump administration being such a blunt instrument at all times might actually be the thing that causes the reckoning. Do you see it playing out that way?
There are a few different things there, and it’s not entirely true that we haven’t backed off this stuff at all. The revelations from Snowden did lead to some changes in how these things happen. And there are now — I forget exactly what they’re called, but they’re like these civil liberties amicus people within the FISA court — who will present the other side on certain issues.
And we’ve seen some of the authorities limited in certain ways, and they come up for reauthorization every so often, and activists have been very aggressive about pushing back and trying to put some more guardrails on. But to the larger question, I think there are two different things. You’re half right in that this administration is not subtle and just says out loud the things it shouldn’t.
“We’re at war with Iran, we’re doing it, it’s happening. We’re not even going to try the dance.”
In ways that all previous administrations wouldn’t do. But they haven’t really said that directly about surveillance, especially surveillance of Americans. There have been hints of it, but they haven’t come out as strongly on that. The other half of it has to do more with Anthropic’s positioning and the general view of AI as this possibly existential technology, where Anthropic has always presented itself as, “We’re the thoughtful good guys,” and whether or not you believe that is kind of beside the point. They have this reputation out there: “We’re trying to do this in a way that is safe, that respects humanity and is paying attention to all of these things.” And so when you have that clash, that’s where the struggle comes in.
You have a Trump administration that just wants to be able to do whatever it wants to do, and it’s not subtle about that. And then you have Anthropic, whose self-description and public persona are always, “We’re thoughtful and we respect humanity and rights and all of these things.” That’s probably where the clash came in, because Anthropic, as has been made clear, has worked with the Defense Department for a while and has many other contracts with the government. That hasn’t been a problem.
It was only in these specific areas where, as the government was seeking to expand the contract that it had, that the senior leadership of Anthropic began to say, “Wait, we have to make sure that we’re not crossing these red lines that would potentially harm our reputation as the thoughtful, safe AI provider.”
I want to briefly ask you about surveillance in general, and in particular Anthropic’s Fourth Amendment concern. The Fourth Amendment says the government can’t unreasonably search you. The best way to understand the Fourth Amendment is by listening to “99 Problems” by Jay-Z. So if you need to take a break and go listen to “99 Problems,” that’s great. It’s all in there. I listened to it when I was in law school and it made perfect sense.
But the government generally needs a warrant to search you. And as more and more of your life goes online, there are lots and lots of exceptions to this. But the idea is they should still need a warrant online. Anthropic’s argument is, “Well, the AI will never get tired. It can search everything all the time. That means we’re just going to do mass surveillance.”
Yet even before AI showed up, the idea that the government could search everything that belonged to you was out there. The idea that the government didn’t need a warrant to search all of your stuff was out there. The idea that if any of your data ever went outside the country for a brief second, the government could intercept it there — that was out there, too.
When I was in college, around the time of the Patriot Act, the debate was that they weren’t going to search your actual data, but they could get the metadata, and the metadata alone, the data about your data, would be enough to precisely locate you at all times. Even that was too far. And we’ve been doing this dance of what can the government collect? What is permissible? What does it need to keep us all safe, and what’s too far? Those lines have moved.
So just briefly describe the generalized concern about surveillance at that scale and where we are now, before the AI situation made everything exponentially more complicated.
Here I have to introduce another concept that probably should have been mentioned earlier, but it is important, which is called the “third-party doctrine.” The idea with the Fourth Amendment is that the government can’t search you or your things without a warrant, and it can’t get a warrant without probable cause that you’ve committed some sort of crime. But there’s this concept that came about decades ago, the third-party doctrine, which says that that protection doesn’t necessarily apply, or doesn’t apply at all, to things that are held by somebody else, even if it is your data.
The earliest and most obvious version of this was the phone records that the phone company had of who you called. The phone companies weren’t recording your calls, but they were recording who called whom: if I called you, there would be a record at the phone company that says, “Mike called Nilay.” And what had been determined by multiple courts was that the government can go and request that, and it doesn’t need a warrant for that, because it’s not a search of your data; it’s the third party’s data, and the third party can agree to just hand it over.
But those were cases from the 1960s and 70s, where it was determined that the government can get access to that without a warrant, when there wasn’t that much third-party data out there. The rise of computers and the internet changed that. Now, everything is third-party data. Everything that we do is collected by some company somewhere and has a record of it. So basically every bit of data about you, where you are, who you speak to, who you interact with, what you say, what you’re doing, all of that is pretty much held by third parties these days. So the third-party doctrine has swallowed the entire Fourth Amendment to some extent, where anything that is about you that somebody else has, there’s a much lower standard for what the government can do to request it.
Just to be specific, this means when my data is in iCloud, the government can go to Apple and get my data out of iCloud without ever telling me?
They can request it. They can easily request it without a warrant. Then the company has its own rights and can determine what it wants to do with that request. It can just give the data up. It can reject the request out of hand. Or, as most of them will do if it’s a serious request, it can alert you and say, “The government is requesting some of your data. You can go to court and try to block them.” If you don’t, they will hand over your data in seven days or whatever it might be.
Again, it depends. If it’s a criminal investigation, then there may be some sort of gag order where the company is not allowed to tell you. There are all sorts of situations, but most of them involve less than the level of protection that the Fourth Amendment would require if it was data or any information or anything in your own home.
The amount of data you have on someone else’s cloud server is massive, right? Every single thing that you do generally on the internet now is backed up in some way or recorded in some way on someone else’s servers. The government has found this way to get around the Fourth Amendment and say, “Well, that’s not actually yours. It belongs to Amazon. We can go talk to Amazon,” and Amazon has to stand in the middle of that process and say, “We’ve invented another process to somewhat protect the people.”
I look at that—and when I was covering the first third-party doctrine cases that covered the cloud services and the government kept winning, that’s basically when I turned into the Joker. I was like, “All of this stuff that we’re pretending about textualism and the plain [reading], none of this means anything, because we just horsepowered our way through the backdoor into everyone’s data using this ancient law.”
And then I look at this and I look at Anthropic and I say, “Well, this is the same pattern.” This is a private company saying, “Okay, we understand your position. We understand that you’ve reinterpreted the law to mean this thing, and we’re going to put some process in between you, our tool, and the data of Americans flowing through our service.” I am just wondering if you see that parallel there, between Anthropic and Amazon and Azure and whatever other cloud services that exist that hold so much of our data.
Yeah, though there are a few clarifications that are important here and that make this a little bit different. In fact — I think The New York Times had this reporting first — the main clause that was most important to Anthropic was specifically about data collected from commercial services and not being able to use Claude on that data, which is exactly this issue in terms of third-party data. But I do want to clarify the main difference. What we were just talking about before, with Amazon or other third parties hosting your data: those were cases where, because of where those companies sit in the ecosystem, they were hosting your data directly.
With Claude, it’s not that anyone is worried about the NSA looking through your Claude usage. It’s about them going out and getting third-party data from Amazon or more likely the sort of sneaky, hidden data brokers that serve ads on your phones and know your location and your interests and things like that. And then feeding that into a system that Claude would then work on. That’s what Anthropic really didn’t want to be a part of. So wherever or however the government would collect that data from a third party, Anthropic said, “We don’t want our tool to be used on that data.”
Apple famously stands up to the FBI asking it to put a backdoor in the iPhone, and Apple says “no,” and they stand up to Trump. And there’s a part of how our system works in which big private companies get to say “no” to the government on behalf of their customers. And this felt the same way: Apple, again, won’t put a backdoor in the iPhone, and the big cloud providers say, “There’s a little bit of a process you have to jump through before you get the individual data.”
Here it seems like Anthropic is saying, “We’re not just going to do bulk analysis of data that you have acquired from other parties because that leads to 24/7 mass surveillance of Americans, and we don’t want to do that.” Yet that seems like a bridge too far for this administration. Is there any coming back from that?
We will see. In the past when that’s happened — and it’s happened plenty of times with most of the large tech companies, at some point they’ve said something is a bridge too far — where that normally goes is to court. The companies will go to court or the administration will go to court and there’ll be some sort of court battle.
The backdoor into the iPhone is a perfect example of that. It went to court and they fought it out, though they never quite got to a conclusion because the FBI eventually did just manually break into the iPhone and then didn’t want the court ruling to ruin that in the future.
But in this case, where the escalation is, and where this is different from those past situations, is that rather than just going to court, the Trump administration did this “supply-chain risk” designation, which is just insane. That tool was designed to stop potential foreign malicious actors from supplying technology that could put hidden surveillance tools into the larger technology stack, by allowing those suppliers to be banned. To apply it to a US-based company, basically for having an ethics policy, feels like a real, real misuse of that tool.
Even that tool was questionable in some ways, but you could understand the impetus behind it when you’re talking about a Chinese networking firm or something along those lines. Here, it makes no sense. So the reaction to this goes so far beyond what would normally be seen in this case. You could see traditionally there would be some sort of court case and either side could start it and it would just be a battle about how the contract could be applied.
But that’s not what is happening here. This administration is effectively saying, “If you don’t give us absolutely everything that we want, if you don’t set up your tools to work the way we want them to work, then we will effectively try to destroy your entire business.” And that is an escalation.
There’s one piece of this that I want to end on, and it’s kind of the most galaxy-brain version of this. FIRE, which is a free speech advocacy group, put out a blog post just before we started recording making the argument that forcing Anthropic to build tools it doesn’t want to build is a free speech violation, that it’s something called compelled speech. There’s a lot of history here. There is some deep Verge and Techdirt, in-the-weeds, existential-crisis history here.
But it basically comes down to the idea that code is speech, that writing code for a computer is a form of speech and the government can’t force you to do it, and a whole bunch of stuff flows from that. Do you buy this argument that forcing Anthropic to build tools that it doesn’t want to build is compelled speech?
Yeah, I actually think it is fairly compelling. Compelling compelled speech. But no, I do think it is an interesting argument. It’s one that had been a little bit further down the list of the issues that I was thinking about. I was obviously mostly more focused on the Fourth Amendment issues, but I think the FIRE argument is not wrong. We have seen this in other contexts. It did come up in the backdoor issue as well, in terms of trying to build backdoors into encrypted systems.
Companies definitely raised First Amendment claims, saying, “It is compelled speech to force us to write that kind of code.” It is a valid argument. It might be, again, one that courts are probably less willing to tackle initially if they can deal with these issues some other way. But I’m glad that FIRE made that post and I think it is an interesting and compelling argument.
Yeah, it’s just the nature of the second Trump administration that it’s such a blunt instrument, it is almost certain we will end up fighting about all of the issues all at once.
Yes, every Bill of Rights amendment has to be challenged in some form or another with every possible issue.
[Laughs] Spin the wheel.
I’m sure we can fit a Third Amendment violation somewhere in here.
Sure, yeah. Claude has to live in your house now. Exactly. It’s going to be great. We’re doing [amendments] one, three, four, and seven. We rack ’em up.
Mike, this has been great. I cannot believe you haven’t been on the show before. You have to come back soon.
Absolutely. Whenever you want me.
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!



