Power & Market
Americans have witnessed people picketing in support of a $15 minimum wage. It’s called “The Fight for $15.” Florida citizens will even vote on embedding a $15 minimum wage in their state constitution in the November 2020 election. A minimum wage plebiscite is pending in Idaho in 2020. If the past is any indication, the prospects for the Florida and Idaho initiatives are good. Of the 27 state minimum wage votes since 1988, 25 passed. The two that were rejected, in Missouri and Montana, were in 1996. For the 25 that passed, electoral margins have almost always been substantial. For information about these ballot initiatives, see ballotpedia.org.
Interestingly, Floridians first put a minimum wage in their constitution in 2004, when it was set at $6.25 and indexed to inflation each year. That’s the reason Florida’s current minimum wage is $8.46, an otherwise curious number. Florida’s 2020 amendment proposes increasing the current minimum wage to $10.00 in September 2021, and thereafter in $1.00 annual increments to $15.00 in September 2026.
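As a quick arithmetic sketch of the proposed schedule (the year-by-year mapping below is an illustration inferred from the amendment's description, not official ballot text):

```python
# Illustrative sketch of Florida's proposed minimum wage schedule:
# a jump to $10.00 in September 2021, then $1.00 increases each
# September until reaching $15.00 in September 2026.
# Assumption: exactly one increase per year, as the description implies.
schedule = {2021 + i: 10.00 + i for i in range(6)}

for year, wage in sorted(schedule.items()):
    print(f"September {year}: ${wage:.2f}")
```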
Idaho’s plebiscite, if enacted, will raise the current $7.25 minimum wage to $12 by 2024. Moreover, and particularly important for what follows, it will eliminate a current provision that allows people under age 20 to be paid $4.25 an hour for their first 90 days on the job.
Media accounts of the “Fight for $15” are supportive. They typically offer numerical exercises showing “inadequate” earnings of people working full time at the minimum wage. The accounts are laced with terminology such as “fair” and “living” wages, a tactic surely designed to evoke public sympathy/support for those struggling economically. It also seizes the high moral ground for supporters since it puts their opponents in the untenable position of appearing to favor “unfair” and “non-living” wages. It always helps to control the terminology used in a debate, doesn’t it?
Sympathy and terminology without knowledge can be dangerous, or as the saying goes, “the road to hell can be paved with good intentions.” Or as University of Chicago Nobel Laureate in economics George Stigler once put it: “Whether one is a . . . churchman or a heathen, it is useful to know the causes and consequences of economic phenomena.”
The Source of the Problem
Whether intended or not, increases in the minimum wage doom those whose economic value to employers lies between the current minimum wage and the proposed higher one. The lower rungs of people's economic ladders are cut off. They lose their jobs regardless of the intentions of their supposed supporters. Their supposed supporters are actually their enemies. They need to “help” less.
It is unfortunate to read that the person who filed Florida’s ballot initiative said: "In life, I think that you’re supposed to do the most, for the most with the least. ... I did [the ballot initiative] in a way that would be business-friendly, and not just throw them in the deep end.” Similarly, an official with “Idahoans for a Fair Wage” said, "I think this would stimulate the economy, and our goal really is to help lift working Idahoans out of poverty. We’ve found that a lot of the people that this would really help would be people pretty much 35 and under."
In truth, the bills only make many low-wage workers legally unemployable. This is an odd way to “stimulate the economy.” And as noted at the outset, eliminating a sub-minimum wage for workers under 20 is a way to hurt, not help, those who are less productive workers. Just because a worker has relatively low productivity is no reason to outlaw that person's job. But that is what minimum wage increases do.
- "How Minimum Wage Laws Increase Poverty" by George Reisman
- "Outlawing Jobs: The Minimum Wage" by Murray Rothbard
- "Bernie Sanders Shows Us How a Minimum Wage Hike Hurts Workers" by Ryan McMaken
Last week, at the Art Basel art fair in Miami, a piece of art composed of a banana duct-taped to a wall sold for $120,000. At least one other identical piece sold for a similar amount, and a third was priced at $150,000. The banana used in the display is a real banana, and on Saturday a performance artist named David Datuna ate some of it.
Datuna's stunt merely illustrated what everyone should have already known: the value of the artwork had almost nothing to do with the banana itself. Its value came not from the amount of labor that went into it or from the cost of the physical materials involved. A spokeswoman for the gallery summed up the real source of the item's value, noting, “He [Datuna] did not destroy the artwork. The banana is the idea.”
In other words, the people who purchased the art weren't actually purchasing a banana and tape. They were buying the opportunity to communicate to peers that they were rich enough to throw around $120,000 on a work of art that would soon cease to exist. This was a transaction that involved purchasing status in exchange for money. The banana was only a tiny part of the exchange.
Moreover, the transaction offered the opportunity for the gallery, the art seller, and the art buyer to all further increase their status by becoming the topic of countless news articles and social media discussions. As was surely anticipated by the artist and everyone else involved in the banana sale, the media could be counted on to act as if this art was something new, outrageous, or exciting. "Art world gone mad," the New York Post announced on its front page. Hundreds of thousands of commentators in various social media forums chimed in.
One wonders, however, how many times this shtick can be repeated before people lose interest. Apparently: many times. After all, this sort of art is not a new thing. For decades, avant-garde artists have been using garbage and other found objects to create art. And people with a lot of disposable income have been willing to pay a lot of money for it. It's all basically an inside joke among rich people. And regular people have the same reaction over and over again.
But there's absolutely nothing at all that's shocking, confusing, or incomprehensible from the point of view of sound economics. Transactions like these should only surprise us if we're still in the thrall of faulty theories of value, such as the idea that goods and services are valued based on how much labor and materials went into them. That's not true of any good or service. And it's certainly not true of art.
Is It Garbage or Is It Art?
In fact, two identical items can be valued in two completely different ways simply if the context and description of the objects changes.
According to the Daily Mail, a 2016 study suggests that people value ordinary objects differently depending on what they are told about the objects: "According to the new research, being told that something is art automatically changes our response to it, both on a neural and a behavioural level."
In this case, researchers in Rotterdam, the Netherlands, told subjects to rate how they valued objects in photographs. When told that those objects were "art" people valued them differently.
In other words, the perceived value of objects could change without any additional labor being added to them, and without any physical changes at all.
The value, it seems, is determined by the viewer, and we're reminded of Carl Menger's trailblazing observations about value:
Value is a judgment economizing men make about the importance of the goods at their disposal for the maintenance of their lives and well-being. Hence value does not exist outside the consciousness of men.
One moment the viewer may think he's looking at garbage, which he has likely learned is of little value. When told that said junk is really "art," the entire situation changes. (Of course, we would need to see their preferences put into real action via economic exchange to know their preferences for sure.)
The change, as both Menger and Mises understood it, is brought about not by changes to the object itself, but by changes in context and in the subjective valuation of the viewer.
A glass of water's value in a parched desert is different from that of a glass next to a clean river. Indeed, a glass of water displayed in a museum as art — as in the case of Michael Craig-Martin's "An Oak Tree" — is different from water found in both deserts and along rivers. Similarly, the value of a urinal displayed in a museum as art — as with Marcel Duchamp's "Fountain" — is different from a physically identical urinal in a restroom.
The Daily Mail article attempts to tie the researchers' observations to the theories of Immanuel Kant on aesthetics. But, one need know nothing about aesthetics at all to see how this study simply shows us something about economic value: it is, to paraphrase Menger, found in the "consciousness of men."
And it is largely due to this fact that centrally planning an economy is so impossible. How can a central planner account for enormous changes in perceived value based on little more than being told something is art?
Is a glass of water best utilized on a shelf in a museum, or is it best used for drinking? Maybe water is best used for hydroelectric power? Exactly how much should be used for each purpose?
When discussing the problems of economic calculation in socialism, Mises observed that without the price system, there simply is no way to say that a specific amount of water is best used for drinking instead of being used for modern-art displays. Nor is the fact that people need water for drinking the key to determining the value of water. (See the diamond-water paradox.)
In a functioning market, consumers will engage in exchanges involving water in a way that reflects how much they prefer each use of water to other uses. At some moments, some consumers may prefer to drink it. At other moments, they may prefer to water plants with it. At still other moments, they may want to contemplate an art display composed of little more than a glass of water. The price of water at each time and place will reflect these activities.
Without these price signals, attempting to create a central plan for how each ounce of water should be used is an impossible task.
Do we need to know why people change their views of objects when told they are art? We do not. Indeed, were he here, Mises would perhaps be among the first to remind us that economics need not tell us the mental processes that lead to people preferring different uses for different objects, although we can certainly hazard a guess. It's unlikely that the buyer of the taped banana bought it because he or she planned to eat it.
But even if we are wrong about the buyer's motivation, the fact remains that the buyer valued the banana at $120,000 for some reason — and the value was subjective to the buyer.
Similarly, we can't know for sure why each individual values water for drinking over "art water" or vice versa. And a government planner or regulator — it should be noted — can't know this either.
Leftist revolutionaries have long been in the habit of reworking the calendar so as to make it easier to force the population into new habits and new ways of life better suited to the revolutionaries themselves.
The French revolutionaries famously abolished the usual calendar, replacing it with a ten-day week system with three weeks in each month. The months were all renamed. Christian feast days and holidays were replaced with commemorations of plants like turnips and cauliflower.
The Soviet communists attempted major reforms to the calendar themselves. Among these was the abolition of the traditional week with its Sundays off and predictable seven-day cycles.
That experiment ultimately failed, but the Soviets did succeed in eradicating many Christian traditional holidays in a country that had been for centuries influenced by popular adherence to the Eastern Orthodox Christian religion.
Once the communists took control of the Russian state, the usual calendar of religious holidays was naturally abolished. Easter was outlawed, and during the years when weekends were removed, Easter was especially difficult to celebrate, even privately.
But perhaps the most difficult religious holiday to suppress was Christmas, and much of this is evidenced in the fact that Christmas wasn't so much abolished as replaced by a secular version with similar rituals.
Emily Tamkin writes at Foreign Policy:
Initially, the Soviets tried to replace Christmas with a more appropriate komsomol (youth communist league) related holiday, but, shockingly, this did not take. And by 1928 they had banned Christmas entirely, and Dec. 25 was a normal working day.
Then, in 1935, Josef Stalin decided, between the great famine and the Great Terror, to return a celebratory tree to Soviet children. But Soviet leaders linked the tree not to religious Christmas celebrations, but to a secular new year, which, future-oriented as it was, matched up nicely with Soviet ideology.
Ded Moroz [a Santa Claus-like figure] was brought back. He found a snow maid from folktales to provide his lovely assistant, Snegurochka. The blue, seven-pointed star that sat atop the imperial trees was replaced with a red, five-pointed star, like the one on Soviet insignia. It became a civic, celebratory holiday, one that was ritually emphasized by the ticking of the clock, champagne, the hymn of the Soviet Union, the exchange of gifts, and big parties.
In the context of these celebrations, the word "Christmas" was replaced by "winter." According to a Congressional report from 1965,
The fight against the Christian religion, which is regarded as a remnant of the bourgeois past, is one of the main aspects of the struggle to mold the new "Communist man." … the Christmas Tree has been officially abolished, Father Christmas has become Father Frost, the Christmas Tree has become the Winter Tree, the Christmas Holiday the Winter Holiday. Civil-naming ceremonies are substituted for christening and confirmation, so far without much success.
It is perhaps significant that Stalin found the Santa Claus aspect of Christmas worth preserving; he apparently calculated that a father figure bearing gifts might be useful after all.
According to a 1949 article in The Virginia Advocate,
at children’s gatherings in the holiday season … grandfather frost lectures on good Communist behavior. He customarily ends his talk with the question “to whom do we owe all the good things in our socialist society?” To which, it is said, the children chorus the reply, ‘Stalin.’
Today the Washington Post published a bombshell report titled “The Afghanistan Papers,” highlighting the degree to which the American government lied to the public about the ongoing status of the war in Afghanistan. Within the thousands of pages, consisting of internal documents, interviews, and other never-before-released intel, is a vivid depiction of a Pentagon painfully aware of the need to keep from the public the true state of the conflict and the doubts, confusion, and desperation of decision-makers spanning almost 20 years of battle.
As the report states:
The interviews, through an extensive array of voices, bring into sharp relief the core failings of the war that persist to this day. They underscore how three presidents — George W. Bush, Barack Obama and Donald Trump — and their military commanders have been unable to deliver on their promises to prevail in Afghanistan.
With most speaking on the assumption that their remarks would not become public, U.S. officials acknowledged that their warfighting strategies were fatally flawed and that Washington wasted enormous sums of money trying to remake Afghanistan into a modern nation....
The documents also contradict a long chorus of public statements from U.S. presidents, military commanders and diplomats who assured Americans year after year that they were making progress in Afghanistan and the war was worth fighting.
None of these conclusions surprise anyone who has been following America’s fool's errand in Afghanistan.
What makes this release noteworthy is the degree to which it shows the lengths to which Washington went to knowingly deceive the public about the state of the conflict. This deception extends even to the federal government’s accounting practices. As the report notes, the “U.S. government has not carried out a comprehensive accounting of how much it has spent on the war in Afghanistan.”
As the war has dragged on, so has the struggle to justify America’s military presence. As the report notes:
A person identified only as a senior National Security Council official said there was constant pressure from the Obama White House and Pentagon to produce figures to show the troop surge of 2009 to 2011 was working, despite hard evidence to the contrary.
“It was impossible to create good metrics. We tried using troop numbers trained, violence levels, control of territory and none of it painted an accurate picture,” the senior NSC official told government interviewers in 2016. “The metrics were always manipulated for the duration of the war.”
Making Washington’s failure in Afghanistan all the more horrific is how predictable it was for those willing to see the warfare state for what it is.
In the words of Lew Rockwell, in reflecting on the anti-war legacy of Murray Rothbard:
War is inseparable from propaganda, lies, hatred, impoverishment, cultural degradation, and moral corruption. It is the most horrific outcome of the moral and political legitimacy people are taught to grant the state.
On this note, the significance of the Washington Post’s report should not distract from another major story that has largely been ignored by mainstream news outlets.
Recently, multiple inspectors with the Organisation for the Prohibition of Chemical Weapons have come forward claiming that relevant evidence related to their analysis of the reported 2018 chemical attack in Douma, Syria, was excluded from the organisation's published findings. As Counterpunch.org has reported:
Assessing the damage to the cylinder casings and to the roofs, the inspectors considered the hypothesis that the cylinders had been dropped from Syrian government helicopters, as the rebels claimed. All but one member of the team concurred with Henderson in concluding that there was a higher probability that the cylinders had been placed manually. Henderson did not go so far as to suggest that opposition activists on the ground had staged the incident, but this inference could be drawn. Nevertheless Henderson’s findings were not mentioned in the published OPCW report.
The staging scenario has long been promoted by the Syrian government and its Russian protectors, though without producing evidence. By contrast Henderson and the new whistleblower appear to be completely non-political scientists who worked for the OPCW for many years and would not have been sent to Douma if they had strong political views. They feel dismayed that professional conclusions have been set aside so as to favour the agenda of certain states.
At the time, those who dared question the official narrative about the attack - including Rep. Tulsi Gabbard, Rep. Thomas Massie, and Fox News’s Tucker Carlson - were derided for being conspiracy theorists by many of the same Serious People who not only bought the Pentagon’s lies about Afghanistan but also the justifications for the Iraq War.
Once again we are reminded of the wise words of Ron Paul: “Truth is treason in the empire of lies.”
These attacks were promoted as justification for America to escalate its military engagement in the country, with the beltway consensus lobbying President Trump to reverse his administration's policy of pivoting away from the Obama-era mission of toppling the Assad regime. While Trump did respond with a limited missile attack, the administration rejected the more militant proposals promoted by some of its more hawkish voices, such as then-UN Ambassador Nikki Haley.
In a better timeline, the ability of someone like Rep. Gabbard to see through what increasingly looks like another attempt to lie America into war would warrant increased support in her ongoing presidential campaign.
Instead, we are likely to continue to see those that advocate peace attacked by the bipartisan consensus that provides cover for continued, reckless military action abroad.
We usually think of Friedrich Hayek as a moderate, at least when compared with Mises and Rothbard, but he had a radical side as well. Hidden away in a note to the third volume of Law, Legislation, and Liberty, he makes a comment that puts him far outside “respectable” public opinion. He says that the inventor of “freedom from want” was “the greatest of modern demagogues.” Hayek’s condemnation of Franklin Roosevelt is as forthright as any radical could wish.
The passage where he says that is this: “In view of the latest trick of the Left to turn the old liberal tradition of human rights in the sense of limits to the powers both of government and of other persons over the individual into positive claims for particular benefits (like the 'freedom from want' invented by the greatest of modern demagogues) it should be stressed here that in a society of free men the goals of collective action can always only aim to provide opportunities for unknown people, means of which anyone can avail himself for his purposes, but no concrete national goals which anyone is obliged to serve. The aim of policy should be to give all a better chance to find a position which in turn gives each a good chance of achieving his ends than they would otherwise have.” (Law, Legislation, and Liberty, Volume 3, note 42, pp. 202–203 in the one-volume edition of the trilogy published by Routledge, 1982)
We are repeatedly told that basic human rituals are falling by the wayside. Why don't we all sit down to dinner as a family anymore? Why don't we spend time with each other anymore? Why are we all sleep deprived?
Sometimes these problems are blamed on people spending too much time devoted to kids' intramural activities or other types of school- and recreation-based activities. Some analysts note people can't tear themselves away from their smart phones in order to go to bed at a decent hour.
But very often, we're told, this lack of time comes down to too much work. The articles covering these topics are full of anecdotal evidence of people with multiple jobs, long commutes, and crushing work responsibilities.
These problems no doubt afflict many people. They're certainly an issue for people at the stage of life where couples have school-age children and face the host of bills and responsibilities that come with raising a family.
But, the anecdotal evidence is contradicted by years of data showing people aren't nearly as hard pressed for a few free moments as is supposed.
Specifically, consider the 2019 Q1 data provided on media consumption by the Nielsen Company. According to their extensive sampling of TV, smart phone, and video game console users, American adults spend an average of four-and-a-half hours per day watching television. They spend an additional 54 minutes using TV-connected devices such as DVD players and video game consoles.
People over fifty watch the most television and generally consume the most screen-based media. People in the 50-64 age bracket watched nearly six hours of television per day and spent an additional two hours and forty-seven minutes on smart phones. People in the over-65 category watched even more television than that.
Not surprisingly, people in the 18-34 age group consumed the least media overall, and also used televisions the least. These people are more likely to have young children — which makes TV viewing harder — and may spend more time outside the house with friends. In this group, people watched on average one hour and fifty-four minutes of television, but were on phone apps for three-and-a-half hours.
Across age groups, media consumption ranged from nine hours to nearly thirteen hours. Per day.
But to err on the conservative side, let's remove radio time — which could just be part of the daily commute — and "internet on a computer," which could be chores and work time. Even if we do this, we find Americans are on average watching videos, playing video games, and consuming media seven or eight hours per day.
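To make that arithmetic explicit, here is a rough tally built from the figures cited above (the smartphone number borrows the 18-34 figure purely as an illustrative stand-in):

```python
# Rough daily media-time tally using the Nielsen Q1 2019 figures
# cited above. The smartphone entry uses the 18-34 figure as an
# illustrative stand-in; the others are the adult averages.
minutes_per_day = {
    "live television": 4 * 60 + 30,   # about four and a half hours
    "TV-connected devices": 54,       # DVD players, game consoles
    "smartphone apps": 3 * 60 + 30,   # illustrative figure
}

total_hours = sum(minutes_per_day.values()) / 60
print(f"Roughly {total_hours:.1f} hours per day")
```

Even with radio and computer time excluded, the total comfortably exceeds seven hours a day.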
And yet, media outlets and pundits are often telling us that ordinary people absolutely don't have time to prepare a meal or maintain friendships. Given the data here, I'm skeptical of these assertions.
Now, these are averages, so it may be that people are very squeezed for time during the week, but then consume enormous amounts of media on the weekends. Certainly, there are people out there who consume live sports programming virtually all day on Sunday during football seasons. But then that would imply these people at least have time to spend with friends and family on weekends.
But if people have more than seven hours per day on average to watch re-runs of Friends, watch in-depth analysis of NBA games, and fire up the Playstation, why can't they manage to get eight hours of sleep?
If this data is correct, then the anecdotal evidence just doesn't add up, and it's simply not the case that people don't have time to do anything other than work, eat some fast food, and then do it all over again.
This isn't to say that poverty doesn't exist or that everyone is more or less average. We've all encountered people who at least sometimes work multiple jobs or are pushed to their limits by family obligations, work, and medical problems.
But the statistical data on media consumption suggests this isn't the typical experience.
Today is an interesting milestone for libertarian-minded people, as well as those with a fondness for trivia.
Eighty-six years ago today, FDR 86’d Prohibition.
The manufacture and sale of alcohol became a crime on January 17, 1920, and remained so until December 5, 1933. Prohibition serves as a leading example of what happens when people in a largely free society lose part of their freedom. Prohibition did not stop Americans from drinking; it just drove an industry underground and into the control of gangs. Consequently, gang violence escalated during the Prohibition years.
Prohibition also escalated police raids against harmless commerce. Prohibition fueled speakeasies as dispensers of beer & booze. Speakeasies obviously dealt with violent gangs as suppliers, but speakeasy customers engaged in voluntary transactions for desired goods. Police raids on speakeasies drove willing customers out of these businesses now and then, and these raids prompted both corruption and a minor change in the English language.
One speakeasy was “Chumley’s,” located at 86 Bedford Street in Manhattan. Some police acted as informants to the bartenders at Chumley’s: shortly before a raid they would call with the message to “86 the customers,” that is, to stop business and push all customers out the door. Hence “86’d” began as slang for putting a stop to illicit business in one bar, but subsequently developed into a more general term for getting rid of something or refusing service. Prohibition ended 86 years ago today.
This is perhaps the only day of the year that a libertarian-minded person might find it appropriate to raise a toast to FDR.
Cheers to the 32nd President, for just this one occasion.
A few months ago, just after Boris Johnson had become Prime Minister, I wrote an article addressing the ongoing selection process for the next Governor of the Bank of England, in which I gave my prediction of who the top 5 most likely candidates might be.
Much has changed in the British political landscape since then, including the decision to hold a general election on 12th December. As a result, Chancellor of the Exchequer Sajid Javid has announced that he will not be making his selection for the next BoE Governor until after the election, to avoid compromising the Bank’s independence by announcing during a “politically sensitive” time.
However, an official shortlist has been delivered to the Treasury, and the 5 names “thought to be” on the shortlist, as reported by the Grauniad, are Andrew Bailey, Minouche Shafik, and Ben Broadbent (who were on my predicted shortlist) as well as Shriti Vadera and Jon Cunliffe (who were not).
I included Vadera as an “honourable mention” in my article, but am admittedly surprised she made it onto the official shortlist, given her reputed “fiery” management style and strong partisan links to the Labour Party. However, bearing in mind the government’s stated intention to make this a diverse hiring process, and their use of the headhunting firm Sapphire Partners which “specialises in diversity and placing women in top roles”, it makes sense that they would have wanted to include her, at least to avoid the shortlist being 80% pale, male, and frail.
Everyone seems to be surprised that former Reserve Bank of India Governor and central banking superstar Raghuram Rajan was not included on the list, having previously been second only to Andrew Bailey in the bookies’ estimations. It’s perfectly true, as has been pointed out, that his failure to be included on the shortlist (or even interviewed) doesn’t necessarily mean he’s out of the race; current Governor Mark Carney was not included in the shortlist to replace Mervyn King in 2013. However, I have long had my doubts about the likelihood of Rajan getting the job, mainly due to the simple fact that (through no fault of his own, mind you) he isn’t British. I imagine this wouldn’t be such an issue in normal circumstances, but current Governor Mark Carney is the first of the Bank’s 120 Governors to have been foreign, and his tenure has been marked by repeated accusations of insufficient familiarity with the British economy. So it’s easy to imagine the pressure that must exist to not give the job to a second full-blown foreigner in a row.
I say “full-blown” foreigner to distinguish Rajan from the person who I personally believe is most likely to get the job, Egyptian-born Nemat “Minouche” Shafik. As I mentioned in my original article, Shafik’s status as a woman of colour would tick all the diversity boxes the government could reasonably hope for, yet she has sufficient “insider status”, both within the British economy and the Bank of England itself, to shield her from the sort of criticism to which Rajan might be subjected, and to be a considerable advantage in its own right. Educated at Oxford and the London School of Economics (two damn fine institutions, in my own entirely unbiased opinion), Shafik is the current Director of the latter institution, and was formerly a Deputy Governor at the Bank of England, having sat on its rate-setting Monetary Policy Committee from mid-2014 to early-2017. During her tenure, Shafik typically voted with the rest of the MPC, making it difficult to isolate her personal views on monetary policy. The only factor I can imagine holding her back would be her reportedly difficult working relationship with current Governor Mark Carney. However, if I were a betting man the research for my original article would have led me to bet on Shafik, and that remains true now that the official shortlist is out.
The only character on the list who I didn’t mention in my original article is Sir Jon Cunliffe, who is currently the Bank’s Deputy Governor for Financial Stability. Cunliffe has held a wide variety of senior civil service positions since 1990, and is currently on the Bank’s Financial Policy and Monetary Policy committees. He was educated at the University of Manchester, with his highest degree being a Master’s in English Literature, which, more than anything else, illustrates Britain’s unique status as a country where you can work in the financial sector with a degree in almost any subject. Cunliffe recently made headlines when he gave a speech arguing that low long-term interest rates put pressure on financial stability, and risk more severe downturns; a potentially welcome sentiment for Austrian ears.
When Mark Carney’s term as Governor comes to an end in late January, the situation in British politics could potentially be very different: either the Conservatives will win the election and pass Johnson’s EU withdrawal bill, in which case Britain will be out of the EU by February, or Jeremy Corbyn will be Prime Minister, Brexit will be delayed (potentially indefinitely), and this shortlist of candidates might be rethought or thrown out entirely. For the time being, however, this shortlist provides an interesting insight into the priorities and policy goals of Britain’s government and central bank, but doesn’t provide much hope for Austrians.
The 50-year US war on drugs has been a total failure, with hundreds of billions of dollars flushed down the drain and our civil liberties whittled away fighting a war that cannot be won. The 20-year “war on terror” has likewise been a gigantic US government disaster: hundreds of billions wasted, civil liberties scorched, and a world far more dangerous than when this war was launched after 9/11.
So what to do about two of the greatest policy failures in US history? According to President Trump and many in Washington, the answer is to combine them!
Last week Trump declared that, in light of an attack last month on US tourists in Mexico, he would be designating Mexican drug cartels as foreign terrorist organizations. Asked if he would send in drones to attack targets in Mexico, he responded, “I don't want to say what I'm going to do, but they will be designated.” The Mexican president was quick to pour cold water on the idea of US drones taking out Mexican targets, responding to Trump’s threats by saying, “cooperation, yes; interventionism, no.”
Trump is not alone in drawing the wrong conclusions from the increasing violence coming from the drug cartels south of the border. A group of US Senators sent a letter to Secretary of State Mike Pompeo urging that the US slap sanctions on the drug cartels in response to the killing of Americans.
Do these Senators really believe that, facing US sanctions, these drug cartels will close down and move into legitimate activities? Sanctions don’t work against countries, and they sure won’t work against drug cartels.
A recent editorial in the conservative Federalist publication urges President Trump to launch “unilateral, no-permission special forces raids” into Mexico, as the US did into Pakistan to fight ISIS and al-Qaeda!
I am sure the military-industrial complex loves this idea! Another big war to keep Washington rich at the expense of the rest of us. And the 2001 Authorization for the Use of Military Force can even be trotted out to fight this brand new “terror war”!
Perhaps unintentionally, however, this sudden push to look at the Mexican drug cartels as we did ISIS and al-Qaeda does make sense. After all, the rise of the drug cartels and the rise of the terror cartels have both been due to bad US policy. It was the US invasion of Iraq based on neocon lies that led to the creation of ISIS and expansion of al-Qaeda in the Middle East and it was the US war on drugs that led to the rise of the drug cartels in Mexico.
Here’s another suggestion: maybe instead of doing the same things that do not work, we might look at the actual cause of the problems. The US war on drugs makes drugs enormously profitable to Mexican suppliers eager to satisfy a ravenous US market. A study last year by the Cato Institute found that with the steady decriminalization and legalization of marijuana across the United States, the average US Border Patrol agent seized 78 percent less marijuana in fiscal year 2018 than in FY 2013.
Instead of declaring war on Mexico, perhaps the answer to the drug cartel problem is to take away their incentives by ending the war on drugs. Why not try something that actually works?