Slavery And Freedom: An Interpretation Of The Old South By James Oakes

 

Hey everyone, I hope you’re doing great.  This post is the first installment of a five-part study on the history of slavery in the U.S.

James Oakes is a distinguished American history professor at the Graduate Center, City University of New York.  He’s written extensively about slavery, abolition, radical Republicans and the Civil War.  Slavery And Freedom is a summary of the consequences that slavery and slave laws had on the antebellum South.

Legal Outsiders

Oakes begins by describing and defining terms.  Slavery began with an act of violence, an act of human theft in which someone was captured and sold or traded to someone else.  Slavery is the complete denial of freedom, and slave-state legislators used strict legal terms that were antithetical to human rights and freedoms to define the peculiar institution.

Oakes wrote about how honor, respect and reputation come from one’s peers and colleagues, and how slavery was a suspended death sentence that carried a sense of dishonor throughout the South: it rendered slaves powerless and humiliated.  This is because in honor societies, high standards of honor go hand in hand with great dishonor.

And slaves were greatly dishonored given they were legally excluded from southern society.  As perpetual outsiders, they were denied the basic assumptions liberal capitalism promised to workers: the right to own property, to keep the fruits of one’s labor and to refuse to work.  Dishonor pervaded every part of a slave’s life: enslavers frequently auctioned, sold, whipped, tortured, raped and castrated slaves.

Slavery was so pervasive that it destroyed entire families and communities.  Everything the slaves produced, every task they performed and even their physical bodies were controlled by their owner.  For example, slaves couldn’t make contracts, own property or form legal partnerships.  And because enslavers controlled every aspect of the slave’s life, right down to marriages, work locations, sleep and work schedules, diets and clothing, enslavers defined what it meant to be unfree.  But for the slave system to function at all, enslavers had to learn not to interfere too deeply into the personal lives of slaves.  If they didn’t, slaves would resist the enslaver’s demands more and more, according to historian Eugene Genovese.  It’s ironic, then, that slaves were so excluded from southern law given how essential they were to the southern economy.

But slaves resisted the enslavers from the beginning and enslavers created and enforced rules to counter the slave resistance.  Louisiana’s slave code of 1824 read “the slave is incapable of making any kind of contract…All that a slave possesses belongs to his master; he possesses nothing of his own…The slave is incapable of exercising any public office or private trust; he cannot be tutor, curator, executor, nor attorney; he cannot be a witness in either civil or criminal matters.  He cannot be a party in any civil action, either as plaintiff or defendant, except when he has to claim or prove his freedom…Slaves cannot marry without the consent of their masters, and their marriages do not produce any of the civil effects which result from such contract.”

Alabama’s slave code of 1852 read “No master, overseer, or other person having the charge of a slave, must permit such slave to hire himself to another person, or to hire his own time, or to go at large…No slave must go beyond the limits of the plantation on which he resides, without a pass…No slave can keep or carry a gun…No slave can own property…Not more than five male slaves shall assemble together at any place off the plantation.”

There was no greater humiliation than to be denied a relationship to one’s family.  Slave marriages weren’t legally recognized in slave states; marriage, after all, changed how one’s estate was bequeathed, affected how children were raised and shaped how someone related to society in general.  But slave marriages were different: enslavers frequently sold slaves and dissolved slave marriages and families, so slaves had no legal siblings, parents, spouses or ancestors.  As many as one-third of all slave families, or some 600,000, were broken up by their owners between 1820 and 1860.  The most likely time a slave could expect their family to be broken up was after an enslaver died and their estate was divided.

But what made slave families unique was how strong their relationships were in spite of southern dishonor and legal kinlessness.  Fewer slaves ran away when familial and community relationships were strong; most runaway slaves were unmarried young men without children.  Families were the center of the slave community’s resistance to slavery.

A Brief World History Of Slavery

Oakes noted that there were five genuine slave societies in world history in which slaves made up between one fourth and one half of the total population: Ancient Greece, Ancient Rome, Brazil, the Caribbean and the American South.  But the American South was by far the largest of these slave societies.  Slaves made up 49.1 percent of the total southern population, with 3,950,511 slaves out of 8,036,700 in the census of 1860.  Ancient Rome’s slave population was about half the size of the South’s, and Brazil’s slave population was less than half the size of Rome’s.

And throughout world history, slaves emancipated themselves, were sometimes emancipated by military force or were manumitted after years of service.  Military emancipation happened in the Peloponnesian War, the Roman Civil War and the American Civil War.  It also happened when British troops freed thousands of slaves during the American Revolution and the War of 1812.

Aristotle, Justinian, Thomas Jefferson and John Locke wrote extensively about slavery.  Plato questioned the morality of slavery when Greeks enslaved other Greeks.  Karl Marx castigated drivers for over-working slaves.  Jean-Jacques Rousseau wrote about how slave owners’ interests serve only themselves.

Oakes noted that Christians sometimes enslaved other Christians and Muslims sometimes enslaved other Muslims.  And slaves were sometimes called non-believers after they were captured to ensure greater obedience to authority among the non-enslaved community.

Intersection Of Capitalism And Slavery

Slavery wasn’t a permanent, life-long status in Greece, Rome, Brazil or the Caribbean, but it was for the most part permanent in the American South.  Why was slavery so permanent in the U.S.?  Oakes’s answer was that there were enormous profits to be made in the American cotton industry.  After antiquity, slavery made a comeback and expanded significantly during the 17th, 18th and 19th centuries, when European consumer demand for commodities like cocoa, coffee, rice, sugar and tobacco grew exponentially.  Essentially, slavery returned because capitalism spread throughout the world from 1650 to 1888.

To get an idea of how dependent Southerners, Northerners and Europeans were on slavery, just try to imagine what the South would have been like without slavery in the first place.  Oakes wrote that slavery was the engine of southern society; without it, the entire social structure would have collapsed.  Slavery was how nearly all wealth was created, it was the distinction between the South and the North, and it’s what made the South southern.

It is doubtful a slave economy could have sustained an industrial revolution, mostly because slavery lacked incentives for people to innovate.  Instead, slavery encouraged resentment and backwardness.  In the end, universal human rights and the dynamic force of free labor ultimately overwhelmed and destroyed slavery.

Enslavers, Yeomen And Slaves

Before abolition, the Old South consisted of enslavers and drivers, non-slave owning yeomen farmers and slaves.  In slave economies, wealth typically concentrates among enslavers more than it does among non-slave owners and the former group regularly displaces the latter group from their farms.

Oakes asked, who owned slaves?  His answer was that the wealthiest 10 percent of owners, or some 2 or 3 percent of southern men, owned 50 percent of all the slaves in the South.  The majority of owners owned fewer than 5 slaves and only a quarter of owners owned more than 10.

Slave ownership affected the enslaver’s family dynamics: it replaced child labor and enhanced the education of the enslaver’s children, and the literacy rate among the slave-owning class was one of the highest in the western world.  Slavery also reduced fertility rates; the more slaves an owner owned, the fewer children they had.  But slavery also encouraged soil exhaustion and required planters to seek to expand slavery into more arable lands in the west.

Who were the yeomen?  Well, yeomen were farmers and craftsmen who valued community, independence, individualism and tradition.  They detested aristocrats, corruption and government officials.  They lived in the highlands and mountainous regions, had poorer quality soils, received less rainfall than the delta plantations and were incapable of producing high-yield crops like cotton.  Furthermore, yeoman farms generally lacked the access to public infrastructure, waterways, canals and markets that plantations had.

Yeoman farm families had higher fertility rates than enslaver families.  Labor was more evenly divided among yeoman sons and daughters, and parents exercised greater control over how estates were distributed, but yeoman families were also more dependent upon the labor of their children than enslavers were.  As yeoman children matured, they demanded their own land to farm.  This created a land crisis given land was already scarce and enslavers owned more and more of it with time.

Plantations, on the other hand, were large and expanded from lowland river deltas up into the countryside and highlands.  Plantations were located mostly near the best soils, between one half and two-thirds of all plantations had slaves working on them, and sometimes up to 90 percent of the local population was enslaved.  Oakes wrote about how enslavers could be separated from their land and still create a profit: they frequently hired out slaves to work for other planters or they moved them.  And moving was so common among the enslavers that only 20 percent of them lived on the same plantation 20 years later.

Slavery gentrified real estate and caused local property values and taxes to increase beyond the means of the yeoman.  This process threatened the yeomen’s existence, which festered into political resentment in the antebellum South.  Yeomen farmers began to realign their political interests with northerners more than with enslavers, and many yeomen fought for the Union during the Civil War.  Throughout history, when non-slave owning farmers were displaced by slave owners, they joined an army: this happened in Ancient Rome before the Roman Civil War just as it happened in the South before the American Civil War.

Yeomen owned fewer slaves than delta planters, but just because slave ownership in the highlands was rare doesn’t mean the yeomen weren’t racist.  In fact, many southern and northern Americans were racist in the 19th century.  Many favored a Herrenvolk type of democracy instead of a planter aristocracy, and slaves were considered the mud-sills of society, as Senator James Hammond of South Carolina put it.  Oakes wrote that racism came after slavery to reinforce beliefs about slavery-capitalism and perpetuate the system.

Case Study: Slave State Laws

Slaves were usually emancipated and made naturalized citizens in other slave societies.  But racism was such a big part of slavery and capitalism in the American South that it pervaded the legal system; the worst examples were of course the U.S. Supreme Court cases Prigg v. Pennsylvania of 1842 and Dred Scott v. Sandford of 1857.

The slave resistance began to manifest itself in state slave law cases.  With each new trial, a slave’s right to life might be marginally extended and the balance of power between the state, enslavers and slaves might be redefined.  And each new case challenged the legitimacy of slavery itself.

One of the worst state slave law decisions came about in North Carolina in the late 1820s.  John Mann, an enslaver, had rented Lydia, a slave, in Chowan County, North Carolina for one year.  It’s unknown why Mann whipped Lydia, but when she tried to escape, he shot and wounded her.

North Carolina authorities believed Mann’s shot was disproportionate to Lydia’s escape attempt and he was charged with assault and battery.  The jury ruled against Mann in the criminal trial; however, he appealed, claiming that an enslaver couldn’t be charged with assault on a slave because slaves were the property of the enslaver.

Judge Thomas Ruffin of the North Carolina Supreme Court ruled that slaves had no rights in State v. Mann in 1829.  Ruffin wrote that the law gave the enslaver “full dominion of the owner over the slave…the power of the master must be absolute, to render the submission of the slave perfect.”  But in State v. Negro Will, the North Carolina Supreme Court decided that if a slave killed their overseer or owner in self-defense, the homicide would be considered manslaughter, not murder.

Will was a slave who fought Allen, his foreman, over the possession of a hoe in Edgecombe County, North Carolina in January, 1834.  Will broke the hoe and ran to work at a nearby cotton mill.  Richard Baxter, the overseer, heard of the fight, grabbed his gun, mounted his horse and told Allen to bring his whip.  Will attempted to run away and Baxter shot Will in the back.  Will continued to run but Baxter caught him, and the two wrestled to the ground, where Will stabbed Baxter in the arm with a knife, killing him.  Will was tried and found guilty of first-degree murder by the Edgecombe County Superior Court and was sentenced to die.

Will’s owner, James Battle, investigated the matter and believed Will had acted in self-defense.  Battle appealed the case to the state supreme court, which reversed Will’s conviction.  Judge William Gaston concluded that if the homicide had been committed by one free man upon another, the charge would have been no more severe than manslaughter, and that the homicide lacked the premeditation required for first-degree murder.

Judge Gaston’s decision was consequential because it pushed back against Judge Ruffin’s reasoning in State v. Mann.  Gaston recognized the slave’s right to personal security and limited an enslaver’s power in State v. Will.  Abolitionists, newspapers and law journals applauded Gaston’s decision.

These cases illustrate the slaves’ resistance to the enslavers’ demands, but they also demonstrate how arbitrary the power of enslavers was in the first place.  Slaves, after all, were just as important as, if not more important than, the crops they produced and the soil they tilled.  Without the slaves, cotton wouldn’t get picked and corn wouldn’t get shucked.  It was in the enslaver’s interest to treat the slaves as humanely as possible.

Because slave laws presented so many legal questions, authorities found it necessary to empower slaves with rights before they could restrict the power of enslavers.  Slave state legislatures began to re-write their slave codes; Alabama’s slave code read “The master must treat his slave with humanity and must not inflict upon him any cruel punishment; he must provide him with a sufficiency of healthy food and necessary clothing; cause him to be properly attended during sickness, and provide for his necessary wants in old age.”  Southern legislators and judges limited the punishments permitted to an enslaver: whipping, castration and ear cropping were gradually eliminated from slave codes, capital offenses were reduced and public executions were prohibited.

But this presented an enforcement problem for slave states: specifically, how could a slave state force an enslaver to treat slaves, who were legal outsiders, humanely when enslavers wielded such great power?  Where did state power end and enslaver power begin?  Authorities had to assume enslavers used self-discipline and complied with these laws, which made the laws moot and unenforceable.  Furthermore, it became difficult if not impossible to prosecute these cases given slaves were denied legal personalities and rights such as self-defense, trial by jury, bearing witness, testifying against an enslaver and legal due process.

And what happened to Judge Ruffin? Well, he continued to issue more slave case law decisions and even agreed with Judge Gaston.  In 1839, he wrote an enslaver’s “authority is not altogether unlimited. He must not kill. There is, at the least, this restriction upon his power: he must stop short of taking life” in the North Carolina Supreme Court case State v. Hoover.  After so much legal reform, slavery was still a brutal system of oppression and theft: slaves hardly had any of the same rights enslavers had by 1860.

Case Study: Intersection Of Federal Laws And State Slave Laws

The South dominated the White House for the first 75 years of the republic by utilizing the Three-Fifths Compromise and the Fugitive Slave Clause.  However, problems with slavery began to arise at the federal level when the Census of 1840 revealed that immigration had made the northern population grow much faster than the South’s.  This count yielded more seats in the House of Representatives to northern states, making the Three-Fifths Compromise less consequential for the South than it had been before 1840.  This bolstered the slave resistance in the South and the abolition movement in the North.

Of course non-slave states in the North had relied on personal liberty laws which were founded on the Somerset principle.  Lord Mansfield wrote in the Somerset case in 1772 that “no master ever was allowed here (in England) to take a slave by force to be sold abroad because he deserted from his service…therefore the man must be discharged.”

Debate about which government, state or federal, had supreme authority began with the Nullification Crisis in 1832, and it was exacerbated by northern states’ rights and the Fugitive Slave Act of 1793.  The specific question was how northern states’ rights and personal liberty laws could be respected if slave catchers were allowed to capture and return runaways to the South.  It was as if personal liberty laws in the North were moot and all states were slave states, especially after the Prigg v. Pennsylvania decision of 1842.  Debate escalated with the Fugitive Slave Act of 1850, which required common citizens to assist in the recovery of fugitive slaves, denied fugitives the right to a trial and designated enforcement to federal commissioners and officials.

At the heart of the debate was southern fear over the slave resistance.  Southerners worried the resistance could easily become a revolt if the slaves were provoked.  And the South would of course suffer the most from a slave revolt given almost 90 percent of the African-American population lived in the South on the eve of the Civil War.

The Meaning Of The Civil War

Secession led to Civil War in April, 1861.  Thousands of slaves emancipated themselves and ran to Union lines.  Congress, led by Radical Republicans determined to win the war for reunion and incrementally destroy slavery, ordered the Union Army not to return runaway slaves in March, 1862, which weakened the Fugitive Slave Act.  Congress also passed the first and second Confiscation Acts in August, 1861 and July, 1862 respectively.  The former law authorized Union armies to seize Confederate property and free the slaves, but in hindsight it was unenforceable given it was limited to the Confederacy.  The latter law freed the slaves of civilian and Confederate military officials in areas occupied by the Union Army in the South, which increased the flow of self-emancipators.

Even after the Emancipation Proclamation, slavery was so persistent that during the Civil War only about 14 percent of the entire enslaved population, or some 550,000 people, emancipated themselves.  And of that number, roughly 179,000 fought for the Union Army and Navy.  Freedmen made up approximately 9 percent of the entire military during the Civil War.

Revisionist historians sometimes debate what the Civil War was really about.  Was it about slavery, freedom, states’ rights, tariffs?  Oakes made it clear that the slaves knew the Civil War was about them: the slaves had always resisted the enslavers, and half a million slaves emancipated themselves during the Civil War.  Emancipation led to revolt, which changed the meaning of the Civil War from a war fought for reunion to a war fought for freedom.

Slavery and Freedom is one of the best books I’ve ever read on slavery in the U.S.  Oakes told an amazing story here; I’d encourage anyone with an interest in the subject to pick it up.  Thanks for reading.


Battles For Freedom: The Use And Abuse Of American History

Hey everyone, I hope you’re doing great.  Eric Foner is a professor of U.S. history at Columbia University; he has written several books about 19th century U.S. political history, about Abraham Lincoln, the Civil War, the Republican Party and Reconstruction.

His latest book, Battles For Freedom: The Use And Abuse Of American History, is a collection of twenty-seven essays published in The Nation between 1977 and 2017.  Foner covers several U.S. history topics in these essays, topics like:

  • How race, class and immigration status intersect in the U.S.
  • The xenophobic sentiment behind the Sacco-Vanzetti trial.
  • The international cry for equal justice in the Scottsboro Boys trial.
  • The similarities and differences of Presidents Lincoln and Obama.
  • How the modern Republican Party is no longer the party of Lincoln.
  • How some southerners walked out of the Democratic Party Convention in 1948 and became southern Republicans 20 years later.
  • The differences between docudramas, historical fiction and revisionist history.
  • Praise for late historians Howard Zinn and Eric Hobsbawm who espoused a critical perspective of institutions.
  • His experience teaching critical U.S. history courses.

After reading these essays, I noted how often Foner elaborated on revolutions and how they’re never complete: they take time to fully develop and require popular support.  Furthermore, a politician can’t bring about great social change by themselves; great social change has to come from a popular movement of the people, and the charismatic politician motivates that movement to achieve its desires.  Finally, Foner wrote about how being the smartest guy in the room might make a president good but welcoming criticism makes a president great.

This was an easy and sweet read at 220 pages.  Foner is my favorite historian; I’d encourage anyone interested in the struggle for equality to read it.

The Limits Of “Hard” Work

Hey everyone, I hope you’re doing great.  You might have heard someone say, “I got to where I am in life with a little hard work.”  Or “if someone wants to live a high quality life, all they have to do is work hard for it.”

It sounds like the word “hard” should be objective here; it should mean the same thing to everyone.  “Hard” also sounds like a qualifier intended to separate “hard” workers from those who don’t work as “hard” and the unemployed.

But critical thinkers should be able to recognize this faulty type of thinking.  Separating people from each other is the classic divide and conquer management strategy.  And “hard” work is mythical thinking that seems Pollyanna to me, at least; “hard” is subject to individual interpretation, and it doesn’t mean the same thing to everyone.

A job might be easier to learn for some workers and more difficult to learn for others.  And not all jobs are easy and not all workers are the same which partly explains why compensation might vary so much.

In fact, it seems like the “harder” a job is, the more likely it is to be replaced by automation: “hard” implies the job is difficult, the labor is expensive and there are probably few qualified employees willing and able to do it.  So let’s look at the status of jobs, income, hours worked and economic inequality in the U.S. to find out if “hard” work is really all that it’s cracked up to be.  Ready, go!

U.S. Job Status

The status of jobs in the U.S. isn’t great: overall, jobs are limited and scarce.  The workforce participation rate was just 62.8 percent in June, 2017, the same rate as in March, 1978.  Workforce participation peaked at 67.3 percent in April, 2000 during the dot-com boom.

To be fair, the population grew by 103 million people, or 46 percent, from 1978 to 2017.  But a growing population shouldn’t affect the workforce participation rate; the rate should remain the same if all other factors remained the same.  We don’t live in a vacuum, however, and the most likely culprit for the workforce participation decline is technology, which regularly replaces labor expenses with automation.

Let’s note here that technology is a virtuous and good thing.  Technology makes us more productive and helps us live higher quality lives in the long term but it also displaces workers in the short term.  Technology presents a challenge for law makers and employers: workers who were displaced by innovation have to be retrained to re-enter the workforce.  Otherwise, workers displaced by technology could become discouraged or economic nationalists.

Furthermore, jobs are changing in the U.S.  Last year, economists at the National Bureau of Economic Research found a significant increase in alternative work arrangements from 2005 to 2015.

“The percentage of workers engaged in alternative work arrangements – defined as temporary help agency workers, on-call workers, contract workers, and independent contractors or freelancers – rose from 10.1 percent in February 2005 to 15.8 percent in late 2015. The percentage of workers hired out through contract companies showed the sharpest rise increasing from 0.6 percent in 2005 to 3.1 percent in 2015. Workers who provide services through online intermediaries, such as Uber or Task Rabbit, accounted for 0.5 percent of all workers in 2015. About twice as many workers selling goods or services directly to customers reported finding customers through offline intermediaries than through online intermediaries.”

This means job growth has been a wash over the previous decade.  The economists found 94 percent of net job growth was in the alternative work category from 2005 to 2015 and more than 60 percent of the growth was due to the increase of “independent contractors, freelancers and contract company workers.”

In so many words, almost all of the 10 million jobs created between 2005 and 2015 were in the alternative, contract, temporary or gig economy.  Take the 327,000 Uber drivers who are classified as independent contractors as examples of the new workforce.

U.S. Wage Status

The status of wages in the U.S. isn’t impressive: overall, wages are low.  Sure, non-supervisory, non-agricultural wages nearly doubled from $12.27 to $21.23 per hour from 1947 to 1973, an average growth rate of 2.1 percent per year based on 2013 dollars.  But the average hourly wage was just $20.13 per hour in 2013, a 5 percent decrease from what it was in 1973.  Overall wage growth was slow or flat for this 40-year period, which means workers can’t pay their bills and debts.
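To see where those percentages come from, here is a quick back-of-the-envelope check in Python.  It’s only a rough sketch: it assumes simple compound annual growth over the 26 years from 1947 to 1973, and the wage figures are the 2013-dollar values quoted above.

```python
# Back-of-the-envelope check of the wage figures quoted above.
# Assumes compound annual growth; wages are in 2013 dollars.

wage_1947 = 12.27   # real hourly wage in 1947
wage_1973 = 21.23   # real hourly wage in 1973
years = 1973 - 1947  # 26 years

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (wage_1973 / wage_1947) ** (1 / years) - 1
print(f"Average annual growth, 1947-1973: {cagr:.1%}")   # ~2.1%

# The 2013 average of $20.13 is about 5 percent below the 1973 level.
wage_2013 = 20.13
print(f"Change from 1973 to 2013: {(wage_2013 - wage_1973) / wage_1973:.1%}")  # ~-5.2%
```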

Who Are Low Income Workers?

Researchers at the National Employment Law Project found 42 percent of U.S. workers were paid less than $15 per hour, or $31,000 per year, in 2015.  They also found women and minorities were overrepresented in jobs which pay less than $15 per hour:

  • Female workers made up 54.7 percent of employees paid less than $15 per hour, though they were 48.3 percent of the U.S. workforce.
  • African Americans made up 15 percent of employees paid less than $15 per hour, though they were 12 percent of the workforce.
  • Latino Americans made up 23 percent of employees paid less than $15 per hour, though they were 16.5 percent of the workforce.

This means more than half of African American workers and close to 60 percent of Latino workers were paid less than $15 per hour in 2015.  The researchers also noted how low wages weren’t just for young workers: 46.4 percent of U.S. workers who were paid less than $15 per hour were older than age 35.

Low income jobs were overrepresented in retail and sales: nearly 3 million cashiers and sales associates were paid less than $15 per hour.  Food service and preparation jobs had the greatest concentration of employees paid less than $15 per hour.

More than 50 percent of all workers were paid less than $15 per hour in the following sectors: farming, fishing and forestry; personal care services; building, custodial and maintenance; home health care; sales; and transportation and moving.  Sixty percent of service workers in restaurants and bars, private households, agriculture, personal laundry, hospitality, retail and administration were paid less than $15 per hour.

Notice that workers in retail and in food service and preparation, along with laborers and movers, custodians, nursing assistants and personal health aides, had median wages of less than $15 per hour.  Curiously, jobs in these sectors will be some of the fastest growing in the near future.

But to be fair, incomes have slightly grown in the U.S. since 2015.  “Real median household income was $56,500 in 2015…up from $53,700 in 2014.  That 5.2 percent increase was the largest, in percentage terms, recorded by the bureau since it began tracking median income statistics in the 1960s.”  Growing incomes reduced the U.S. poverty rate by 1.2 percentage points in 2015, which was the steepest decline since 1968.

U.S. Work Hour Status

The status of work hours in the U.S. is exhausting.  To put it more bluntly, employees are working a lot of hours.  In 2014, researchers at Gallup found 50 percent of all employees reported working more than 40 hours per week; 39 percent said they worked more than 50 hours per week and 18 percent said they worked more than 60 hours per week.

A recent poll by Marketplace found that the top concern for half of all hourly workers isn’t that they work too much but that they work too little and need the income to pay their bills.  Researchers at the Economic Policy Institute found the average worker worked 1,687 hours in 1979 and 1,868 hours in 2007, almost an 11 percent increase or an additional 4.5 work weeks per year.  Work hours grew 20.3 percent for female workers while work hours grew by 4.4 percent for male workers over the same period.
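As a quick sanity check on those Economic Policy Institute numbers, here is a small Python sketch.  The figures come from the paragraph above; the 40-hour week used to convert extra hours into “work weeks” is my assumption, not something stated in the study.

```python
# Rough check of the EPI hours-worked figures quoted above.
hours_1979 = 1687
hours_2007 = 1868

extra_hours = hours_2007 - hours_1979     # 181 additional hours per year
pct_increase = extra_hours / hours_1979   # ~10.7%, i.e. "almost 11 percent"

# Assumption: a standard 40-hour week translates extra hours into work weeks.
extra_weeks = extra_hours / 40            # ~4.5 additional work weeks per year

print(f"Extra hours per year: {extra_hours}")
print(f"Percent increase:     {pct_increase:.1%}")
print(f"Extra work weeks:     {extra_weeks:.1f}")
```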

Hours worked grew fastest among workers in the bottom fifth of income earners, 22 percent from 1979 to 2007.  Middle income employee hours worked increased by 10.9 percent over the same period.

To be fair, the top 5 percent of income earners are working more hours too.  This group’s hours worked grew by 7.6 percent and their wages grew by 30.2 percent from 1979 to 2007.  Real hourly wages grew by 14.8 percent if the dot-com boom years of 1995 to 2000 are excluded.

The top 5 percent of income earners took home $110,000 in 2014, but these employees don’t work for the same reason working class and middle class employees do.  The top 5 percent of income earners don’t work out of necessity, to pay their bills or support families; they work for pleasure, power, conspicuous consumption or production, or, as one writer bluntly put it, “because they can.”

What Do Managers Think Of Employees Who Work So Many Hours?  

Professor Erin Reid of Boston University’s Questrom School of Business found managers couldn’t tell the difference between employees who worked 80 hours per week and those who just pretended to work “hard.”  Reid found managers reprimanded employees who were transparent about working fewer hours but she didn’t find any evidence that these employees were any less productive or any sign that the employees who worked more hours were any more productive.

Take the legal system as an example of “hard” work, of workaholism.  Let’s assume going to court is very expensive and also acknowledge the fact that lawyers at large law firms frequently complain about the number of hours they work each week.  Given these details, one could believe reaching a resolution via arbitration or deposition is the function of the legal system.  But in reality, we find this system is a “socially unnecessary arms race, wherein lawyers subject each other to torturous amounts of labor just because they can.”  Technology and professionalism used to limit arms races in the past, but these factors have actually increased competition and productivity, which means both sides work more hours today.

U.S. Economic Inequality Status

The status of income inequality in the U.S. is infamous: economic gains aren’t being shared equally and most U.S. workers are too poor to work fewer hours.  For some historical context, the U.S. experienced great economic inequality from the beginning of the 20th century until 1929, when the Great Depression began.

The New Deal was implemented, which reduced economic inequality, and the middle class grew until about 1978, when the top 0.1 percent’s share of income began to increase from 7 percent in 1979 to 22 percent in 2012, a level nearly as high as the economic inequality experienced in 1929.  The bottom 90 percent of income earners’ wealth, however, began to fall in the 1980s.

The top one percent of income earners brought home $393,941 per year in 2014.  Keep in mind, this group didn’t just earn hundreds of thousands of dollars per year; some of its members earned millions and billions of dollars per year.  For some context, 15,656,000 millionaires, or 46 percent of the world’s millionaires, lived in the U.S. in 2015.  Notice how this group believes they work “harder” than low income earners.

Furthermore, the U.S. had the third worst Gini coefficient among OECD countries in 2014, behind Mexico but ahead of Turkey, and the U.S. was second only to Israel in terms of the worst relative poverty rates among OECD countries.  Some 1.5 million families and 3 million children had incomes of less than $2 per person per day in the U.S. in 2015.  A third world population lives in the U.S.

Moreover, economic mobility in the U.S. is declining.  Researchers at the National Bureau of Economic Research found absolute income mobility rates, the fraction of children born in the U.S. who will earn more than their parents, fell from nearly 90 percent for children born in 1940 to just 50 percent for children born in the 1980s.

What is behind the economic inequality?  Economists found 19 percent of income inequality was correlated with a person’s race, gender and parents’ income.  Also, the neighborhood a child grows up in could affect their income into adult life.  “The outcomes of children whose families move to a better neighborhood – as measured by the outcomes of children already living there – improve linearly in proportion to the amount of time they spend growing up in that area, at a rate of approximately 4 percent per year of exposure.”

What Effect Does “Hard” Work Have On The Middle Class?

It appears high income earners benefit from “hard” work the most in the U.S.: less than half of the wealth went to the middle classes in the U.S. in 2015, the lowest share when compared to the rest of the developed world.

Furthermore, the U.S. is experiencing economic polarization as the middle class is being hollowed out.  Experts at Pew Research found the share of adults in the U.S. living in middle-income households fell from 61 percent in 1971 to 50 percent in 2015.  The share living in the upper-income tier grew from 14 percent to 21 percent over the same period.  And the share living in the lower-income tier grew from 25 percent to 29 percent.  Notice how the 7 percentage point increase at the top was almost double the 4 point increase at the bottom.

Finally, the middle class has been stuck recently: the middle class was 25 percent more likely to stay in the middle class from 1996 to 2006 than it was from 1976 to 1986.  The Pew Research Center reported in August that 71 percent of middle class adults say it’s harder to get ahead now than 10 years ago, an increase of 9 percent since the Great Recession of 2007 to 2009.  Economists found Americans in the highest and lowest income quintiles were far more likely to remain in those quintiles than people in the three middle quintiles.

Do People Think A Poor Person Can Become Wealthy With “Hard” Work?

The number of people who believe it’s possible to be born poor and become rich with “hard” work has steadily declined over the last decade.  In March, 2005, 80 percent of survey respondents said it was possible; 75 percent said it was possible in October, 2011; 71 percent said it was possible in July, 2012; and just 64 percent said it was possible in 2014, the lowest proportion since January, 1983, when the New York Times began asking this question.

The Pew Charitable Trusts ran a similar survey and found more than 40 percent of Americans believed “hard” work, ambition and drive were the most important factors to improve someone’s economic condition.  However, a little more than 10 percent believed that coming from a wealthy family was a more likely determinant of success.

Big Picture

Given the information above, it seems like job growth has been slow, jobs are scarce, wages are low and employees are working a lot of hours.  The qualifying condition here is that someone has to be hired by someone else in the first place.  Fewer jobs in proportion to the population, low wages and long work hours have exacerbated economic inequality.  And it seems like race and gender are having an effect on workers: female minority workers are getting shorted the most in this economy.

Furthermore, while wages have stagnated overall and people are working more hours each week, the costs of living have increased.  Essential life expenses like health care, housing and higher education used to be 25 percent of gross domestic spending in 1980.  However, the costs of these three factors grew to 36 percent of spending in 2015.

It seems like “hard” work won’t reduce economic inequality or make employees wealthy.  In fact, researchers found “hard” workers, or workaholics, tend to be less efficient than their coworkers because it might be difficult for them to play as part of a team.  They might also struggle to delegate work to their coworkers and take on so much extra work that they become unorganized.  Researchers found four distinct workaholic “working styles”:

  • Bulimic workaholics feel the job must be done perfectly or not at all; they put off starting a project and then rush to complete it by the deadline.  They often frantically work to the point of exhaustion with sloppy results.
  • Relentless workaholics often take on more work than can possibly be done.  They balance too many projects, work too fast or are too busy to pay attention to details.
  • Attention-deficit workaholics start with great intensity but lose interest and fail to complete their projects.
  • Savoring workaholics are slow, methodical and overly scrupulous.  They often have trouble letting go of projects and don’t work well with others.  They’re perfectionists and frequently miss deadlines.

In many ways, “hard” work harms workers and their employers.  One study found “hard” workers who put in 55 hours per week experienced greater stress than employees who worked 40 hours per week.  This stress was correlated to lower scores in “vocabulary tests,” “fluid intelligence” and “cognitive function.”

Moreover, employees who worked “hard” were about 12 percent “more likely to become heavy drinkers” and experienced more sleeplessness, depression, diabetes, heart disease, absenteeism, turnover and higher health insurance premiums than those who didn’t.

The Meaning Of “Hard” Work

“Hard” work is a meaningless, self-congratulating label, a pat on the back for those who believe they worked “hard” given their income.  But I think the role technology plays in the economy proves the point: we work to live, not live to work.  More concisely, the point of work is to not have to work so “hard” to live a high quality life, to earn wages great enough to not have to struggle.  Otherwise why work at all?

Furthermore, getting a job in the first place requires a lot of work.  And to claim someone works “hard” is to assert bourgeois privilege over other employees and the unemployed.

Ironically, it seems like we confuse effort with worry.  “We don’t correlate our sense of responsibility with what we are actually producing. We correlate it with how hard we are being on ourselves,” wrote Dan Pallotta in a Harvard Business Review blog post. “I can hunch over my computer screen for half the day churning frenetically through emails without getting much of substance done, all the while telling myself what a loser I am, and leave at 6pm feeling like I put in a full day. And given my level of mental fatigue, I did!”

The Limits Of “Hard” Work

We know there are limits to “hard” work given the consequences that “hard” work has on our leisure time, personal relationships and quality of life.  John Maynard Keynes wrote “Economic Possibilities for Our Grandchildren” in 1928, in which he imagined what the world would look like in a hundred years.  He predicted the “standard of life” in Europe and the U.S. would improve so much that no one would work more than 15 hours per week.

Economists at the New Economics Foundation put forth a proposal for a “21 Hours” work week in 2010, arguing that “widening inequalities, a failing global economy, critically depleted natural resources and accelerating climate change pose grave threats to the future of human civilisation.”

Their plan would tax high income earners more progressively and address critical issues facing the 21st century workforce, issues like “overwork, unemployment, over-consumption, high carbon emissions, low well-being, entrenched inequalities, and the lack of time to live sustainably, to care for each other, and simply to enjoy life.”  The plan calls for increasing the minimum wage and reducing hours worked per week so workers can maintain their current incomes.  With this plan, the overall workforce participation rate, which is currently low, should increase.

Keynes would probably uphold European countries as a model of planning for the next workforce if he were alive today.  Sweden, Finland, Germany, the Netherlands, Denmark and the United Kingdom are 6 of the top 10 most competitive countries in the world.  And law makers in each of these countries prohibited employers from scheduling employees to work more than 48 hours per week.

Employers could also take the initiative to reduce employee work hours, and in doing so they might get healthier and more productive employees.  When hospital administrators reduced nurses’ daily hours worked from 8 to 6, they found “The nurses working six hours took 4.7 percent fewer sick days and fewer work absences than when they worked eight-hour days.”  And “A comparative group of nurses working eight hour days actually increased the number of sick days during the trial by more than 60 percent.”

Essentially, we need to increase labor’s share of the economy: workers’ earnings as a share of gross domestic product must be more equitable.  Labor’s share of the economy fell from 63 percent to 57 percent of GDP between 2000 and 2012.  And even though it recently grew to 58 percent, this is still inadequate.

Planning For The Next Workforce

We know there are diminishing returns to “hard” work.  I suggest we stop working “harder” and instead work “smarter” using technology and public policy.  Social change happens slowly, so let’s think about who are the people most likely to need reform for the next workforce.

It seems like minority female workers who are paid less than $15 per hour would benefit the most from public policy reform.  Law makers would be smart to craft public policy around their general needs, like access to high-quality jobs that pay more than $15 per hour, decreasing poverty and encouraging upward economic mobility.  Specifically, law makers should focus on a $15 per hour minimum wage, shorter work weeks, affordable or public housing, public higher education, a single payer healthcare plan, public daycare and paid family medical leave.

Let’s say a minority female restaurant server lives in the Washington, D.C. area, one of the most expensive cities in the U.S.  Let’s also assume she is single, is younger than age 35, has a 3-year-old child and doesn’t utilize the Head Start program.  She has two jobs, both of which pay $13 per hour, and she works 60 hours per week combined.  Her gross annual income is $40,560, her net income is $31,371 and she isn’t entitled to overtime compensation because she doesn’t work more than 40 hours per week for either employer.
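To make the arithmetic behind those figures explicit, here is a minimal Python sketch.  The hourly wage and combined hours come from the scenario above; the 52-week year and the even split of hours between the two jobs are my assumptions, and the $31,371 net figure is simply the after-tax number quoted above rather than something the sketch computes.

```python
# Gross annual income for the hypothetical D.C. server described above.
hourly_wage = 13.00     # $13/hour at both jobs
hours_per_week = 60     # combined hours across the two jobs
weeks_per_year = 52     # assumption: she works every week of the year

gross_income = hourly_wage * hours_per_week * weeks_per_year
print(f"Gross annual income: ${gross_income:,.0f}")   # $40,560

# Net income after taxes, as quoted in the scenario (not computed here).
net_income = 31_371
print(f"Net income (as quoted): ${net_income:,}")

# No overtime is owed: neither employer schedules her past 40 hours per week,
# even though her combined schedule is 60 hours.
hours_job_a, hours_job_b = 30, 30   # assumption: an even split between the jobs
owes_overtime = hours_job_a > 40 or hours_job_b > 40
print(f"Overtime owed by either employer: {owes_overtime}")   # False
```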

Her paid hours worked end when she leaves her job, but her unpaid hours worked never actually stop at home.  Law makers would be smart to recognize how interconnected affordable and public housing and daycare costs are.  The absence of a federally funded public daycare program means this worker might have to spend up to $22,631 in daycare expenses each year, which is more than “three times what it costs for a year at a public college” in Washington, D.C.

Furthermore, affordable or public housing is scarce in Washington, D.C., with “more than 70,000 people waiting for one of 8,000 units.”  The scarcity of affordable, public housing means she might consider moving to the suburbs, where rent is cheaper than in the city but the commute to and from work takes more time each day.  The tradeoff is that by moving to the suburbs, she will spend fewer hours with her child and will still need a public daycare program.

Law makers would also be smart to further subsidize public higher education to the point it is affordable or free.  This could be done by taxing Wall Street transactions one cent each, millions of which happen every day.  With affordable or free public higher education, this worker could get the education and skills necessary to be competitive in the next workforce and work one full time job.

But a single payer healthcare plan would benefit this employee the most.  The single payer healthcare plan, also known as “Medicare For All,” is a system in which “a single public or quasi-public agency organizes health care financing, but the delivery of care remains largely in private hands. Under a single-payer system, all residents of the U.S. would be covered for all medically necessary services, including doctor, hospital, preventive, long-term care, mental health, reproductive health care, dental, vision, prescription drug and medical supply costs.”

At the moment, the U.S. spends “more than twice as much as the rest of the industrialized nations.”  This is the equivalent of $8,160 per person.  Notice that the U.S. healthcare system underperforms in terms of life expectancy, infant mortality and immunization rates when compared to other developed countries which already provide comprehensive coverage to their populations.

In fact, the U.S. spends more on health care and provides less coverage than other developed countries because of its patchwork system of for-profit payers.  Overhead, underwriting, billing, sales and marketing departments, the profit-seeking motive and exorbitant executive pay are behind these costs.  Administrative costs alone make up 31 percent of total U.S. healthcare expenses.

A single payer healthcare plan could save the U.S. more than $400 billion per year, enough to provide comprehensive coverage to everyone without paying any more than we already do.  The single payer plan would make premiums disappear and 95 percent of all households would save money on their healthcare expenses each month.

Law makers should introduce a single payer healthcare plan.  It would be funded by the savings derived from replacing our inefficient, profit-oriented, multi payer insurance system with a single streamlined, nonprofit, public payer plan.  Sure, this will mean increasing taxes on millionaires and billionaires, but the U.S. already has millions of them.  H.R. 676, also known as the Expanded and Improved Medicare for All Act, is based on the Physicians For A National Health Program’s Physicians’ Proposal to establish a national single-payer health insurance system.

Critical Analysis Of “Hard” Work

“Hard” work dates back to the Protestant Work Ethic; it’s an antiquated term which seems most applicable when describing work performed on the farm, when concentration, efficiency and timeliness affected harvests.  Work used to be where one “learned discipline, initiative, honesty, and self-reliance—in a word, character.”  However, this is no longer the case: technology made society exponentially more productive, which negated the necessity to work “hard,” or even to work at all.  Technology is behind our increased quality of life.  And let’s be honest, a lot of jobs are bullshit anyway.

But the tradeoff of improved technology is more workers are subject to their supervisors today.  Managers determine whether an employee works “hard” or not and who gets hired or fired.  Supervisors have so much discretion over the workforce, they could keep a reserve workforce waiting if they wanted to, in theory.  However, this is unlikely given firms have to run at maximum productivity and new employees are usually only hired after current employees have achieved maximum productivity.

We should note here that workers don’t just work for the sake of work alone, which is what “hard” work seems to imply.  Workers work for wages, high wages, and we have a problem if wages don’t pay the bills commensurate with the cost of living.  This means employers should offer more competitive compensation and benefit plans so employees become more productive, given firms might otherwise lose their best workers to companies that do.

Furthermore, each employee has different strengths and weaknesses, different experiences, education and skills.  Maximizing the productivity of each worker with their experience, education and skills in mind is of course the job of the supervisor.  Therefore, if an employee isn’t working “hard,” it isn’t the fault of the employee; it’s the supervisor’s fault: they could fire an employee who doesn’t work “hard” and hire someone else.

But the more important qualifying condition and argument behind “hard” work is whether someone is employed in the first place.  It seems like “hard” workers have a confirmation bias and use the unemployed as examples of what workers shouldn’t be to distinguish themselves from non “hard” workers and the unemployed.  The rhetoric sounds something like this, “the unemployed didn’t work “hard” and if you want to keep your job, you better be a “hard” worker too, or else you’ll end up just like them.”

But anyone trained in class solidarity, us versus them psychology and critical theory knows separating workers is how management creates class conflict.  Therefore, it’s in every employee’s interest to avoid being separated and reject “hard” work: everyone works “hard” or no one works “hard.”

Conclusion

Workers would be smart to recognize “hard” work is a hollow label and instead focus on achieving a higher quality of life with better wages, benefits and a better work-life balance.  We should demand our law makers pass legislation for a $15 per hour minimum wage, shorter work weeks, public housing, public higher education, a single payer healthcare plan, public daycare and paid family medical leave.  And if we don’t get what we want, let’s re-organize and try, try again.

Are Uber Drivers Independent Contractors Or Employees?

 

 

Hey everyone, I hope you’re doing great.  I took a few weeks off from blogging to move, but I’m back to write about Uber drivers and how they’re classified as independent contractors.  I want to find out whether drivers really are bona fide independent contractors.  So let’s critically analyze the employment relationship of these drivers to see if they’re correctly classified.  But first, here are some quick facts about Uber:

  • Uber is huge.  The company operates in 58 countries and 450 cities.  Some 40 million people used Uber just last month.
  • There were 327,000 Uber drivers as of October, 2015; however, only 4 percent of drivers will still be with the company one year after they began.
  • The company grossed $6.5 billion, $2.1 billion and $1.1 billion in sales in 2016, 2015 and 2014 respectively.
  • Uber CEO Travis Kalanick was worth approximately $6.3 billion in 2014.  Kalanick joined the Trump administration as an economic advisor in December, 2016.  After Mr. Trump issued an executive order which banned international travel from 7 Muslim majority countries, the New York Taxi Workers Alliance staged a strike at John F. Kennedy International Airport.  Uber then tweeted that it had turned off surge pricing at JFK, which was seen as undercutting the strike.  Kalanick was criticized for this move and stepped down from the administration in February, 2017.  He later resigned as CEO in June, 2017 after “a slew of executive departures and sexual harassment allegations” had “raised concerns about the company’s ability to recruit female talent.”

Here are some quick facts about Uber drivers:

  • A full time driver’s salary in the U.S. could be as high as $90,000 per year according to Uber.  But this doesn’t include expenses like the lease, fuel, maintenance, insurance, tolls or other miscellaneous driver fees.  That $90,000 annual salary doesn’t sound like much given these expenses, and the average wage could be as low as $3 per hour in other countries.
  • Uber drivers earn approximately $25 to $35 per hour on average in large cities like New York and Los Angeles.  But in smaller cities, drivers typically earn between $8 and $15 per hour or $30,000 per year salary.
  • Uber drivers must be at least 21 years of age and have 3 years of driving experience.  Drivers must have insurance, registration, licenses and a social security number, have a clean driving record and pass a background check.  Drivers must own or lease a 4-door vehicle which seats 4 or more passengers, excluding the driver.
  • Vehicles may not be older than 10 years, they can’t be marked, taxis or salvaged vehicles.  Vehicles must pass an inspection.
  • Uber drivers may not have any DUI or drug-related offenses, incidents of driving without insurance or a license, fatal accidents, a history of reckless driving or a criminal history.  If someone doesn’t own or lease a vehicle which is less than 10 years old, the driver may rent or finance an automobile from Uber, making them both a driver and a consumer.

Employment Relationship Test

The U.S. Department of Labor defines the employment relationship between workers and their employers.  To employ someone is to “suffer or permit” them “to work” on behalf of an employer.  This implies the worker expects to be paid for their hours worked.  Furthermore, workers are economically dependent on their employers while independent contractors aren’t.

There are six parts to the independent contractor test.  Let’s critically analyze the relationship between Uber and their drivers to see if they are in fact classified correctly as independent contractors.

First, the extent to which the work performed is an integral part of the employer’s business.  Uber claims it is a technology company; however, the purpose of a technology company is to innovate, to make technology better, faster and cheaper.  Technology firms usually improve upon existing technology to save on labor and consumer costs.  But it seems like Uber receives the vast majority of its revenue from drivers who provide rides to riders, not from making technology more efficient.  Providing riders with rides is an integral part of Uber’s business: this is how the firm brings in revenue.

Second, whether the worker’s managerial skills affect his or her opportunity for profit and loss.  For someone to be a bona fide independent contractor, they must be able to hire and fire their own employees, set their rates of pay, promote, demote, evaluate employee performance and manage a budget.  If an Uber driver was a bona fide independent contractor, they should be able to hire drivers to drive for them.

Third, the relative investments in facilities and equipment by the worker and the employer.  This point somewhat applies given the driver must finance, lease or own their  vehicle.  However, Uber sets the conditions for the quality of the vehicle and inspects the vehicle.  Also, if the driver wrecks their vehicle in an accident, the driver is liable for the repairs.

Fourth, the worker’s skill and initiative.  Drivers must of course obtain a driver’s license, but just because someone has a driver’s license doesn’t mean they exercise independent business judgment.  Drivers don’t compete with other drivers: they don’t own their own businesses, can’t negotiate their rates of pay, nor can they hire and fire employees.  Drivers do whatever Uber tells them to do; they have little discretion in how work is assigned to them.

Fifth, the permanency of the worker’s relationship with the employer.  Drivers drive until they quit.  Of course, they could be deactivated if they don’t drive at least once per month.  Once a driver’s account is deactivated because of low ratings or inactivity, Uber will have the driver take a $100 class to be reactivated.  But being deactivated sounds a lot like being fired, and if someone was fired, they must have been hired in the first place.  My point is, only employees may be hired and fired.  An independent contractor’s contract has an expiration date by which all work must be completed.  And if the work isn’t completed, their bond is pulled.  So without a contract and expiration date, the employment relationship is permanent until the driver resigns or is terminated.

Finally, the nature and degree of control by the employer.  Uber controls prices for consumers and wages for drivers, and it takes 20 percent of each fare.  Prices increase or surge during rush hours and weekends when consumer demand is high.  Prices fall during the middle of the day, giving consumers an opportunity to take advantage of affordable ride shares.  But drivers don’t get to set their rate of pay, nor do they control the ride information, rates or prices.
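To keep the six factors straight, here’s a minimal sketch of the test written as a checklist, phrased so that a True answer leans toward employee status.  The answers reflect my reading of the Uber relationship above; the real test is a holistic balancing by a court or agency, not a tally, so this is only a reading aid.

```python
# The six "economic realities" factors, phrased so True leans toward
# employee status.  Answers mirror the analysis above; courts weigh these
# factors holistically rather than counting them.

points_toward_employee = {
    "work is integral to the employer's business": True,
    "worker's managerial skill does not drive profit and loss": True,
    "worker's investment is small relative to the employer's": False,  # mixed: drivers do supply the vehicle
    "work requires little independent business judgment": True,
    "relationship is permanent or indefinite": True,
    "employer controls the terms and conditions of the work": True,
}

leaning = sum(points_toward_employee.values())
print(f"{leaning} of {len(points_toward_employee)} factors lean toward employee status")
```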

It seems like the only factor drivers actually get to control is their schedule.  However, this is a moot point because the employment relationship essentially says, “I’ll come and work for you. You give me some money, and for the duration that I’m working for you, I am under your authority. What I do with my time, where I stand, where I go, who I talk to, how many bathroom breaks I take, where I look, how fast I work: all this is not at my discretion. It’s at the discretion of you, the employer.”

“That waking time, for most people in the world, is most of their waking day. That working time comprises anywhere between two-thirds to three-fourths of all the time that they’re awake — which means, effectively, that three-fourths of their active life is spent giving up their autonomy to somebody whose interests are lined up against their own interests.”

“This lack of autonomy inside the workplace is often compounded by being under their employer’s control outside the workplace.”

Based on this critical analysis, it’s unlikely Uber drivers are bona fide independent contractors.  They’re employees.  The California Labor Commission agreed with this determination when it ruled a driver was an employee in June, 2015.  A similar decision was reached in the U.K. in October, 2016.

This raises the question: why would an employer classify an employee as an independent contractor in the first place?  There are several reasons an employer might do this;

  1. The income tax burden is placed on the independent contractor.  They must save part of their income throughout the year to be able to pay taxes the following April.
  2. Independent contractors are exempt from the minimum wage and overtime rights under the Fair Labor Standards Act which makes them vulnerable to wage theft.
  3. Furthermore, employees misclassified as independent contractors wouldn’t be covered by laws enforced by the Equal Employment Opportunity Commission, adding another layer of vulnerability.  “The EEOC protects the workplace civil rights of employees, including prohibitions of employment discrimination based on factors such as age, race, gender, or disability.”
  4. The state is deprived of the unemployment insurance and Social Security contributions employers would otherwise pay.
  5. Employers weaken collective bargaining power when workers are misclassified as independent contractors because they aren’t covered by the National Labor Relations Act.
  6. Independent contractors usually aren’t allowed to enroll in employer paid health care and pension plans which saves employers money.

The crux of the matter isn’t so much Uber misclassifying employees as independent contractors but how common it is for firms in general to misclassify employees this way.

“In 2000, the Department of Labor commissioned a study that estimated that nearly a third of all employers misclassified some employees as contractors; a 2005 study found that more than 10 percent of workers in the private sector had been wrongly designated as contractors. In 1993, 7 percent of workers were independent contractors or temps; economists estimate this number will grow to 20 percent by 2020.”

Case Study: FedEx

When FedEx Ground and FedEx Home misclassified drivers as independent contractors, unemployment insurance, Social Security, employee health care and pension plans went unpaid and the company saved some 40 percent in labor costs.

But the Ninth Circuit Court of Appeals ruled FedEx misclassified 2,300 workers as independent contractors in California and Oregon in August, 2014.  The Kansas Supreme Court ruled FedEx drivers were employees, not independent contractors, in October, 2014.  FedEx was caught, and it eventually settled the California lawsuit for $228 million in June, 2015.

Which employees are most likely to be misclassified as independent contractors?

Workers in custodial, home health care and secretarial services are the most likely to be misclassified as independent contractors.  And women and immigrants are overrepresented in these fields: at least one-third, or some 3 million, of the more than 10 million independent contractors employed in these low wage jobs would benefit if they were reclassified as regular employees.

Furthermore, drivers have mixed feelings as to whether they are employees or independent contractors.  NPR conducted an informal survey, contacting drivers via email lists and social media.  Of the 927 drivers who responded, 491, a little more than half, said they felt like they were their own boss, while 436 said they didn’t.

What Should We Do About Misclassification?

Consumers and drivers are active agents in the ride share economy.  And they have the power to bring about great change in two different ways.  The first option would be to publicly demand that Uber properly classify their drivers as employees by a certain date.  If that date passes and the firm fails to meet consumer and driver demands, the next best option would be for consumers to stop accepting rides from Uber and for drivers to drive for someone else.

Conclusion

Uber drivers are misclassified as independent contractors.  They should be classified as employees.  And the sooner consumers and drivers demand Uber reclassify these independent contractors as employees, the better off we’ll all be.

 

Violence Won’t Yield Security


Hey there reader, I hope you’re doing great.  You probably saw the video of Dr. David Dao being forcefully removed by airport police from a United Airlines flight on April 9.  Bloggers responded in typical fashion;

“While there may be something to be said for the ability for private businesses to summon the help of the police to remove people from their premises if they refuse to leave peacefully and their presence is unwanted, there is no excuse for the police to cooperate when the reason their presence is unwanted is not “causing a disturbance” or being violent or threatening to other customers, or stealing goods or services, or doing anything wrong at all, but rather wanting to peacefully use the service they legitimately paid for.”

Then pilot-wife Angelia J. Griffin published this piece; “Knowing what I know about airport security, I’m certainly not going to run back into a secured, federally restricted area at an airport flailing my arms and screaming like a banshee…because, you know, that just happens to be breaking a major federal Homeland Security law.”

“But that’s just me. Obviously.”

“The moment I made that particular ill-advised choice, I would become an immediate and imminent threat to the aircraft’s security. That’s kind of a big deal. I mean, come on, I once actually had to remove my infant son’s socks because they mimicked little baby sneakers. These guys mean business.”

Airport police didn’t brutalize Dao to secure the airplane; they brutalized him to enforce the airline’s private boarding priority policy, which reads;

“If a flight is Oversold, no one may be denied boarding against his/her will until UA or other carrier personnel first ask for volunteers who will give up their reservations willingly in exchange for compensation as determined by UA. If there are not enough volunteers, other Passengers may be denied boarding involuntarily in accordance with UA’s boarding priority.”

Derek Thompson at the Atlantic wrote, “That “boarding priority” protects minors and disabled people and makes special commendation for “fare class [and] status of frequent flyer program.” In other words: Don’t worry, First Class folks, you’re safe.”

United Airlines is responsible for removing Dao; it’s their policy, after all.  Airport police escalated the situation after Dao refused to leave, and they beat him.  Dao suffered a “concussion, a broken nose, a sinus injury and lost two front teeth” and will have to have reconstructive surgery as a result.  But according to Griffin, all of Dao’s injuries were suffered for the sake of security.

I disagree: violence won’t yield security; violence multiplies violence.  If violence equaled security, the U.S. would be the most secure place in the world.  But it isn’t.  Instead, American law enforcement agencies have an awful history of brutalizing and killing civilians, especially minorities.  More than 1,000 civilians are killed by U.S. police each year.

Police brutality is so pervasive in the U.S. that the United Nations condemned our law enforcement agencies for it in August, 2014.  Critical thinkers know it’s impossible to separate race from conflict in the U.S.  It’s also impossible to separate violence from a system which enforces a classist, capitalist private boarding priority policy.

United Airlines would have been smart to hold an ad hoc auction.  The airline could have taken bids starting at $800 and increased them in $200 increments until it found a traveler willing to voluntarily deboard.  The cost of finding someone willing to voluntarily deboard, whether it was $2,000 or $20,000, would surely have been less than the costs suffered from this public relations disaster.

Instead, the Airline chose to use airport police to forcefully remove Dao.  And within two business days, “shares of United fell as much as 6.3 percent in pre-market trading, dropping $1.4 billion from the now $21 billion company by market cap. By early trading Tuesday, shares were down 4 percent.”

Airport security is for the most part theater.  “When people are scared, they need something done that will make them feel safe, even if it doesn’t truly make them safer.”  This illusion surely comes from our obsession with the events of September 11, 2001.

It’s foolish to believe peace and security can be achieved through violent means.  We’d be smart to recognize Dao’s removal for what it really was: the unauthorized use of police violence to enforce a private agreement.  Let us demand a better system from the police and airlines.  I think the best way to prevent police brutality and airline abuse from happening in the future is to stop participating in these systems until we get the outcomes we desire.

 

Who Benefits Or Suffers From The Selective Service System?

Hey there reader.  I hope you’re doing great.  I had an interesting conversation with a colleague about American Exceptionalism the other day.  We analyzed several factors which make the U.S. unique among other nations, institutions like our 800 military bases around the world, our use of daylight savings to save, uh, daylight, our use of the Fahrenheit system to measure temperature, our use of the Imperial unit system to measure length, mass and volume.  Why don’t we use the metric system?  Exceptionalism.

Of course, we’re responsible for reinforcing or reforming these institutions and traditions; we’re active, participating agents in society and the workforce.  One institution, the Selective Service System, seemed especially backwards for the 21st century.  I think it’s time we critically analyze the Selective Service System to find out what its purpose is, who benefits or suffers from it and if it’s worth continuing.  But first, here are some quick facts about the System;

  • The program’s annual agency budget was $22,700,000 in fiscal year 2016.
  • 91 percent of all men between the ages of 18 and 25 years old have registered.
  • 95 percent of all men between the ages of 20 and 25 years old have registered.
  • Approximately 17 million men between the ages of 18 and 25 were registered as of 2015.
  • The staff consists of 124 full-time federal employees and 175 part-time military reserve force officers.  The Selective Service System is headquartered in Arlington, VA and applicants can register online.

U.S. Draft History

Conscription first came about during the American Civil War.  The Confederacy drafted troops in April, 1862, more than a year before President Abraham Lincoln signed the Union Conscription Act into law.

Union conscription wasn’t universal: draftees could pay a “$300 fee to avoid service” or hire substitutes to fight in their place.  The unequal nature of the draft led to the Draft Riots of 1863, when Irish-Americans attacked federal buildings, African-Americans and black orphanages in New York City.  The Union draft was suspended after the rebels surrendered in 1865 and the Army maintained a peace-time force of more than 100,000 men for the next half century.

Voluntary enlistment was slow at the beginning of World War I, so President Woodrow Wilson created the Selective Service System in May, 1917.  Congress passed the Espionage Act in June, 1917, which suppressed dissent against U.S. entry into the war.  Nearly 2.8 million troops entered into military service over the next two years; however, armistice was achieved in November, 1918 and the draft was suspended.  Political dissent was further suppressed in 1919 and 1920 during the Palmer Raids, led by a young J. Edgar Hoover.

The draft was reinstated in September, 1940, when President Franklin D. Roosevelt signed the Selective Training and Service Act, the first peace-time lottery draft in U.S. history, into law.  Nearly 45 million men registered and more than 10 million entered into military service between November, 1940 and October, 1946.  The Act expired in March, 1947, the same year the Cold War began.

President Harry Truman signed the Selective Service Act in June, 1948, which “established the first postwar draft in American history.”  The Act was scheduled to expire in June, 1950, but the Korean War began and Congress extended it for another year.  The Act was reauthorized as the Universal Military Training and Service Act in 1951, and more than 3 million men entered into military service from the Korean War through 1961.

Protestors and rioters criticized the draft during the Vietnam War: family status and academic credentials made deferment an option for some privileged young men.  Anti-war sentiment grew and President Lyndon Johnson signed the Military Selective Service Act into law in June, 1967 to rationalize the deferment system.  But the draft remained unpopular and polarizing and dissenters burned their Selective Service System Registration Cards.

“The American people were generally willing to accept” the draft “when service was perceived as universal.  However, in the 1960s, that acceptance began to erode.”

President Richard Nixon amended the Military Selective Service Act and returned the process to a lottery draft in 1969.  1.7 million men entered the armed forces through the Selective Service System between 1965 and 1973.  But the draft remained unpopular: an estimated 500,000 men dodged the draft by moving to foreign countries or by refusing to respond to draft letters, and 200,000 men were charged with draft evasion.  8,000 of them were convicted between 1965 and 1973.

The Department of Defense announced it would suspend the draft in January, 1973 and the Military Selective Service Act expired six months later.  President Jimmy Carter reactivated the Military Selective Service Act by executive order in July, 1980, which required men aged 18 to 26 years to register with the Selective Service System.  President Ronald Reagan campaigned on abolishing the Selective Service System in 1980; however, he didn’t try to terminate the System while in the White House.

U.S. Draft And Post Draft Analysis

The draft wasn’t practical or universal in the years following World War 2.  Demographics played a part in public resistance to the draft during the 1960s: millions of young men were born during the baby boom and were of military age at the time of the draft.  Also, the Vietnam War was a limited war; it was much smaller in scale than the total wars which relied on the draft, like the Civil War and World Wars 1 and 2.

Costs also made the draft impractical during the Vietnam War.  A voluntary military was smaller than a conscripted military, and public resources could be spent more efficiently on a smaller volunteer force than on a larger, drafted force.

Wages and benefits could be more generous when spent on a smaller, voluntary military.  Public protests against the war and the draft were transformed into public support for the troops who served voluntarily.  The professional military received higher wages and better benefits, and its members served more years as a result.

“Besides good pay, careerists demanded quality-of-life benefits such as good housing, child care, health benefits, family advocacy programs, and military stores. It was crucial that the services become “family friendly.””

But conservatives and liberals opposed the draft.  Dissenters on the right argued the state had no right to impose military service on young men without their consent.  Leftists and centrists argued young men without family status or academic credentials were less likely to be deferred.

Scandals and public opposition compounded over time, and the Vietnam War became more unpopular.  President Nixon declared war on drugs in 1971 and the public grew restless for a volunteer force.  Ironically, the Department of Defense began to lose confidence in the draft after draftees experienced drug problems in Vietnam.  The public could watch troops behaving badly on the television news each night.

Military service peaked in 1969 and by 2014 had fallen by 61.8 percent.  To be fair, the U.S. population increased by roughly 60 percent over that stretch, from 202,680,000 to 324,797,000 (see the quick check below).  “The Army, Navy, and Air Force had significant cuts in the numbers of personnel with the end of the Cold War, while the Marine Corps numbers have stayed relatively flat.”  And even though the draft was abolished in 1973, troops are still recruited from mostly middle class and working class families today.  4.8 percent of military personnel were immigrants as of 2008.
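As a quick sanity check on the two population figures quoted above (the endpoints are the ones cited in this paragraph; more precise census numbers would shift the result slightly):

```python
# Percent change implied by the two population figures quoted above.
pop_peak_draft_era = 202_680_000   # the figure cited for 1969
pop_recent = 324_797_000           # the more recent figure cited in the text

increase = (pop_recent - pop_peak_draft_era) / pop_peak_draft_era
print(f"Implied population increase: {increase:.1%}")   # roughly 60%
```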

Who Benefits Or Suffers From The Selective Service System?

We know the System has high registration rates and it seems affordable.  However, it seems redundant and irrelevant given the draft was abolished in 1973.  Let’s analyze the structure of the System to find who benefits and suffers from the law.  After all, “A young man who fails to register with Selective Service may be ineligible for opportunities that may be important to his future.”

Based on this chart I found on the System’s website, registration is required for all male citizens and immigrants residing in the U.S. between the ages of 18 and 25 years.  The list includes physically and mentally handicapped men who are “able to function in public with or without assistance,” “permanent resident immigrants,” “refugee, parolee, and asylee immigrants,” “undocumented immigrants” and “dual national U.S. citizens.”

The list also includes “U.S. citizens or immigrants who are born male and have changed their gender to female” but excludes “U.S. citizens or immigrants who are born female and have changed their gender to male.”

So what would happen if these eligible young men didn’t register with the System?  They’d be excluded from “federal student loans or grant programs…Federal Pell Grants, Federal Supplemental Educational Opportunity Grants (FSEOG), Direct Stafford Loans/Plus Loans, National Direct Student Loans, and College Work Study.”

The U.S. Citizenship and Immigration Services requires a man of eligible age to register with the System to gain U.S. citizenship if he “first arrived in the U.S. before his 26th birthday.”  Citizenship, and the rights that come with it, such as voting, are denied if he fails to register.

The Workforce Innovation and Opportunity Act of 2014 appropriated $10 billion to the States for job training programs so 20 million young men could find a vocation and enhance their career.  But if they didn’t register, they’d be excluded from job training.

“A man must be registered to be eligible for jobs in the Executive Branch of the Federal Government and the U.S. Postal Service.”  If they didn’t register, they’d be excluded from these jobs.

Oftentimes the federal jobs listed above require a security clearance and background check.  “Security clearance background investigations will verify whether or not men are in compliance with federal law.”  If they didn’t register, they’d be excluded from these jobs and clearances.

Failing to register with the System is a felony punishable by a fine of up to $250,000, a prison term of up to 5 years or a combination of both.  Anyone who knowingly counsels, aids, or abets another to fail to comply with the Act is subject to the same penalties.  However, millions of young men have failed to register with the System.  In fact, only 20 men have been prosecuted for failing to register since 1980, and the last indictment was in January, 1986.

So What’s The Purpose Of The Selective Service System?

The primary purpose of the System is to organize young men for the draft.  It also has several other manifest functions; the Solomon Amendment was inserted into the Military Selective Service Act in 1982, making registration with the System a prerequisite for federal student aid.  And the Thurmond Amendment made registration a prerequisite for federal jobs in 1985.  Today, many states have laws which mandate registration in order to obtain or renew a driver’s license.

But the latent effect of the System is to exclude people from fully participating in the workforce if they fail to register.  “The more immediate penalty is if a man fails to register before turning 26 years old, even if he is not tried or prosecuted, he may find that some doors are permanently closed.”

If the primary purpose of the System, organizing men for the draft, no longer exists, then the System should be abolished given the harmful, latent effects it has on the men who fail to register.  These are the same men who would benefit the most from the programs tied to registration: high school dropouts, the unemployed, ex-offenders, legal and undocumented immigrants.

How much does it cost a State when someone fails to register?  Well, the System estimated men who failed to register in California were denied access to more than $99 million in federal and state financial aid and job training between 2007 and 2014.  A combined $35 million in federal and state financial aid and job training were withheld from citizens in Pennsylvania, New Jersey and Massachusetts from 2011 to 2014.

Big Picture

In order for a war to be just, it must be fought in defense.  If we haven’t been attacked, then there shouldn’t be a draft, which makes the Selective Service System moot.  Furthermore, wars are fought on a limited, instead of a total, basis today.  We use better technology to minimize labor costs, which puts fewer lives in harm’s way and keeps more people out of combat.

And with fewer lives lost, the public has less cannon fodder to protest over.  In fact, support for the military has grown since the draft was abolished.  “A Gallup poll last June found that 74 percent of more than 1,000 Americans surveyed had “a great deal” or “quite a lot” of confidence in the military — versus 58 percent in 1975, at the close of the Vietnam era.”

In June, 2012 the Government Accountability Office published a report which stated the “DOD has not reevaluated requirements for the Selective Service System since 1994” and recommended “that DOD (1) evaluate its requirements for the Selective Service System in light of recent strategic guidance and (2) establish a process of periodically reevaluating these requirements.”

Abolish The Selective Service System

The System is a throwback to the early days of the Cold War.  We haven’t been attacked in quite a long while and we no longer need or want a draft.  The System benefitted millions of men but at the same time excluded millions of men who failed to register.  The System is wasteful and costs more than it’s worth.  The sooner we abolish it, the better.

California Is A Model State For Domestic Violence Prevention And Gun Laws

Hey there reader, I hope you’re doing great.  Domestic violence prevention and gun laws are personal policies to me.  My mother Carolynn died in Florence, Arizona on February 13, 2009.  Early reports about her cause of death were uncertain but the county coroner ultimately decided she died from blunt force head trauma.  Several of my mom’s friends reported her death wasn’t just an accident: she might have suffered from domestic violence given the bruises they sometimes saw on her back, neck and shoulders.  Her death is still considered an open investigation.

California state legislators have accomplished a lot in terms of preventing domestic violence and gun violence since the 1990s.  I think if legislators in Arizona, and every state for that matter, adopted similar laws, my mom might still be alive today.  In fact, I think California’s domestic violence prevention and gun laws are a model for all states to emulate.

U.S. Domestic Violence And Gun Ownership Facts

Historian Richard Hofstadter wrote in 1970, “The United States…has a history but not a tradition of domestic violence.”  Hofstadter noted how most of our domestic violence happened in urban areas and yet America’s gun laws reflect a rural, frontier lifestyle.

It seems rational to assume American legislators would pass strict gun laws to prevent domestic violence, especially for urban areas given 80.7 percent of the American population lives in cities and suburbs, where gun crimes are concentrated most.  Unfortunately, lawlessness and gun rights seem to be more common than gun restrictions.  This is because we have a history of resorting to violence to solve problems instead of refraining from it.

Domestic violence is so pervasive in the U.S., 1 in 3 women and 1 in 4 men have experienced some form of physical violence by an intimate partner at some point in their lives.  Firearms were the most commonly used weapon: 53 percent of all domestic violence attacks involved the use of a handgun and guns were 3.5 times more likely to be used to threaten or intimidate someone than to be used in self defense.  Here are some more quick facts about American domestic violence and gun violence;

  • Of the 2,000 domestic violence victims who die in the U.S. each year, 760 were married to the abuser and 80 percent of them were female.  Strangulation is a strong predictor of future female domestic violence death.
  • 76 percent of all homicide victims knew their attackers.  Women were more likely to be victims of violent crime by someone they knew, men were more likely to be victimized by strangers.  Women face the highest risk of being killed by their abuser when leaving or immediately after they left an abusive relationship.  The likelihood of a female domestic violence victim dying increased almost by a factor of 5 when the abuser had a gun.  68 percent of all homicides in the U.S. are committed with a firearm.
  • 68.7 percent of collateral domestic victim deaths–which included children, parents, siblings, friends or new partners of the victim–involved the use of firearms.  5 women are murdered by a gun owner in the U.S. every day.
  • American women were found 11 times more likely to be murdered by a gun owner than women in other high income countries.  More than half of all mass shootings began with or involved the shooting of an intimate partner or a family member.  Women and children made up only 15 percent and 7 percent of all gun homicide victims respectively but these 2 groups were overrepresented in mass shootings at a combined 64 percent from 2009 to 2015: 57 percent of mass shooters targeted family members or intimate partners before killing others.
  • States with high gun ownership rates have higher rates of domestic violence deaths when compared to states with lower gun ownership rates: one study found a 10 percent increase in state gun ownership rates was correlated with a 10.2 percent increase in female gun homicides.

National Domestic Violence Prevention Laws

Congress passed the Violence Against Women Act in 1994. The VAWA was the first national law regarding domestic violence and sexual assault crimes.  It provided federal resources to communities to curb domestic violence and sexual assault.

The law was reauthorized in 2000, providing legal assistance to victims and expanding the definition of domestic violence and sexual assault crimes to include dating violence and stalking.  It was reauthorized again in 2005 to meet the needs of a multilingual population, enhance domestic violence prevention programs, provide housing for victims and fund rape crisis centers.

It was reauthorized again in 2013 to provide lifesaving services for all victims of domestic violence including sexual assault, dating violence and stalking.  The scope of the 2013 expansion covered native born women, immigrants, LGBTQ victims, college students, young people and public housing residents.

The Family Violence Prevention And Services Act of 1984 is the federal funding source for domestic violence shelters and programs.  The FVPSA is administered by the Department of Health and Human Services; it expired in 2008 but was reauthorized in 2010 to include funding for the Child Abuse Prevention and Treatment Act.  The law expired again in 2015.

National Domestic Violence Prevention Law Analysis

The VAWA provided funding to police departments so they could create and staff domestic violence prevention departments.  500,000 law enforcement officers, prosecutors, victim advocates and judges were trained on domestic violence issues each year as a result of the VAWA.  The VAWA also established the National Domestic Violence Hotline which has answered more than 3.8 million calls over the last 20 years.  Here are some more quick facts about the VAWA and FVPSA;

  • Fewer people are experiencing domestic violence: the rate of intimate partner violence declined 67 percent from 1993 to 2010.  The rate of female intimate partner homicides decreased 35 percent from 1993 to 2007, while the rate of male intimate partner homicides decreased 46 percent over the same period.
  • More victims are reporting domestic and sexual violence to police.  Each state reformed its laws which previously treated date or spousal rape as a lesser crime than stranger rape and each state passed laws making stalking a crime.
  • Each state authorized warrantless arrests in misdemeanor domestic violence cases where the responding officer determines probable cause exists.  And each state made it a crime to violate a civil protection order.
  • Many states have passed laws prohibiting polygraphing of rape victims.  35 states, Washington, D.C. and the U.S. Virgin Islands have adopted laws addressing domestic, sexual violence and stalking in the workplace.
  • More than 2,000 domestic violence prevention agencies rely on funding from the FVPSA to serve their clients; the federal budget for the FVPSA ranges between $130 million and $175 million per year.

California State Facts

California is the most populous state in the Union with 39 million people.  And it has a huge economy: if it were its own country, it would be the 8th biggest economy in the world.  California in many ways represents the future of the U.S., with a majority-minority population and the largest populations of 4 major U.S. ethnic groups;

“California has more whites, Latinos, Asians and American Indians than any other state,” the Census Bureau says, “and its combined nonwhite population – 61.5 percent of 39 million Californians – is the second highest of any state.”

“Latinos have become California’s largest ethnic group at 15 million, followed closely by single-race whites at 14.9 million. Although Hawaii is the nation’s only Asian-majority state, California has its largest Asian population, 6.3 million.”

State Domestic Violence Prevention Law Summary

State legislators implemented a series of domestic violence and firearm removal policies to protect victims from abusers beginning in January, 2000.  Legislators passed a gun removal law requiring law enforcement officers to remove firearms when responding to a domestic violence incident, regardless of whether the gun was used in the incident and regardless of whether the suspect was arrested.

Law enforcement officers were also required to remove guns from a domestic violence incident if there was danger involved, regardless of whether a temporary or permanent protective order was in place.  Law enforcement agencies were also ordered to retain seized firearms for a minimum of 48 hours, and if they believed returning a seized gun to the abuser would endanger the victim, the agency would have to petition the court for a hearing to determine whether the firearms should be returned to their owner.  After issuing a temporary restraining order and holding a hearing, the court would then order the suspect to surrender their firearms to a law enforcement officer or sell them to a licensed firearms dealer within 24 hours.  The suspect would have to provide the court with proof of surrender within 48 hours of being served.

“In 2014, California became the first state in the nation to allow family members and intimate partners to directly petition a judge to temporarily remove firearms from a family member if they believe there is a substantial likelihood that the family member is a significant danger to himself or herself or others in the near future.”

State Domestic Violence Prevention Law Analysis

California domestic violence calls decreased by 14 percent from 181,362 in 2005 to 155,965 in 2014.  Domestic violence calls involving a firearm decreased 34 percent from 1,233 in 2005 to 813 in 2014.  And domestic violence incidents involving a weapon decreased roughly 28 percent from 93,027 to 66,645 during the same period.

Using a spreadsheet, I organized the Centers for Disease Control’s data on domestic violence by state for 2010, the last year in which data was available.  The data included six categories;

  • the percentage of women raped by any perpetrator
  • the percentage of women sexually assaulted but not raped by any perpetrator (meaning the victim knew the attacker)
  • the percentage of men sexually assaulted but not raped by any perpetrator
  • the percentage of women stalked by any perpetrator
  • the percentage of women raped, physically assaulted or stalked by an intimate partner
  • the percentage of men raped, physically assaulted or stalked by an intimate partner

I totaled the six category rates for each state and divided by 6 to find the state average rate.  The results were clear: California was better than average across all 6 categories and had a domestic violence rate 8.3 percent lower than the national average.  California had the 6th lowest domestic violence rate given the available data.  And the state’s best score, in the “percentage of women raped by any perpetrator” category, was 20.3 percent lower than the national average.  Note that states with incomplete data were omitted from the ranking and marked NA.
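For anyone who wants to reproduce this kind of ranking, here’s a minimal sketch of the spreadsheet logic in Python with pandas.  The rows of numbers are made-up placeholders, not the CDC’s actual 2010 values; the point is just the mechanics: average the six category rates per state, drop states with incomplete data, compute the national average and compare.

```python
import pandas as pd

# Placeholder values for illustration only; substitute the CDC's 2010 figures.
df = pd.DataFrame({
    "state": ["California", "StateX", "StateY"],
    "women_raped_any_perp": [12.6, 18.0, None],   # None = incomplete data
    "women_sexual_violence_other": [35.0, 44.0, 39.0],
    "men_sexual_violence_other": [21.0, 24.0, 22.0],
    "women_stalked_any_perp": [15.0, 17.5, 16.0],
    "women_ipv": [32.9, 40.0, 36.0],
    "men_ipv": [27.3, 31.0, 29.0],
}).set_index("state")

# Average the six category rates per state; skipna=False leaves NaN for any
# state with a missing category, mirroring the "NA" rows in the spreadsheet.
df["state_average"] = df.mean(axis=1, skipna=False)
ranked = df.dropna(subset=["state_average"]).copy()

national_average = ranked["state_average"].mean()
ranked["pct_vs_national"] = (ranked["state_average"] - national_average) / national_average * 100

print(ranked[["state_average", "pct_vs_national"]].sort_values("state_average"))
```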

Also note, Alaska, Louisiana, Nevada, Oklahoma, South Carolina, New Mexico, South Dakota, Georgia, Tennessee and Texas had the highest domestic violence rates in 2014.

State Gun Law Summary

California law makers adopted an activist approach to public policy making and passed 11 gun restrictions in the 1990s and 2000s following the mass shooting in Stockton in 1989.  “Five children between 6 and 9 years old, all of them refugees from Southeast Asia, were killed and more than 30 people were wounded, about half of them critically, before the gunman shot himself to death.”

In 1994, state legislators prohibited individuals subject to domestic violence restraining orders from owning a firearm while the order was in force.  In 1997, state legislators expanded the crime of carrying a loaded, concealed firearm without a permit in a motor vehicle to cover passengers as well.  The following year, manufacturers faced further restrictions intended to curb gun trafficking.

State law makers passed legislation prohibiting the manufacture and sale of handguns which lacked safety design standards in 1999.  They also required the state Department of Justice to develop standards for firearm safety devices and prohibited individuals from purchasing more than 1 handgun in a 30 day period.  The state’s assault weapon ban was enhanced with a 1-feature test, which made it more difficult for gun manufacturers to evade the ban by modifying a banned weapon.

Legislators prohibited the sale and manufacturing of large capacity magazines capable of holding more than 10 rounds each in 2000.  The following year, the Handgun Safety Certificate Requirement was passed which required individuals to pass a written test and demonstrate safe handling practices before purchasing a handgun.  A state database was developed to record individuals who purchased multiple firearms the same year.

In 2003, law makers passed new restrictions on handgun models and required them to have “chamber load indicators” to prevent accidental shootings.  In 2004, the manufacture, sale and possession of military style 50 caliber guns were prohibited.  In 2007, new handgun models were mandated to be equipped with “microstamping” technology which imprinted identifying information on each cartridge case when the firearm is discharged.

Clerks were required to retain handgun ammunition sales records beginning in 2009.  Two years later, they were further required to retain rifle and shotgun sales records, and individuals were prohibited from openly carrying unloaded handguns in public.  Domestic violence abusers were ordered to surrender their firearms when a protective order was served beginning in 2012, and the open carrying of unloaded long-guns was prohibited the same year.

The year after the 2015 San Bernardino mass shooting, law makers passed background checks on ammunition sales and created a new state database to record ammunition buyers.  Magazines which hold more than 10 rounds were also banned, and transferring firearms to family members who hadn’t passed a background check was prohibited.

State Gun Law Analysis 

State law makers implemented 11 comprehensive gun control policies after 5,424 Californians were killed by gunfire in 1993.  By 2013, the state’s firearm death total had fallen to 2,929, a drop of roughly 46 percent in raw numbers; adjusted for the state’s population growth from 30 million to 37 million over the same period, the firearm death rate fell by about 56.6 percent, leaving California’s gun death rate 29.9 percent lower than the national average.  Today, California has the reputation of being the state with the strictest gun laws in the country.
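Here’s the arithmetic behind the raw-count versus per-capita distinction, using the figures quoted in this paragraph.  The 30 million and 37 million population numbers are the rounded ones the post cites, so exact census figures would nudge the percentages slightly:

```python
# Raw vs. per-capita decline in California firearm deaths, using the
# rounded figures quoted above, so the results are approximate.

deaths_1993, pop_1993 = 5_424, 30_000_000
deaths_2013, pop_2013 = 2_929, 37_000_000

raw_decline = (deaths_1993 - deaths_2013) / deaths_1993
rate_1993 = deaths_1993 / pop_1993 * 100_000
rate_2013 = deaths_2013 / pop_2013 * 100_000
per_capita_decline = (rate_1993 - rate_2013) / rate_1993

print(f"Raw decline:        {raw_decline:.1%}")          # about 46%
print(f"1993 rate:          {rate_1993:.1f} per 100,000")
print(f"2013 rate:          {rate_2013:.1f} per 100,000")
print(f"Per-capita decline: {per_capita_decline:.1%}")   # about 56%
```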

While national gun death rates fell during the 1990s and held steady from 2000 to 2016, California’s gun death rate was 20 percent lower in 2016 than it was in 2000.  And the decrease accelerated, from 1,878 in 2005 to 1,233 in 2014, a difference of 34 percent.  California had the 8th lowest gun death rate the same year.  Gun ownership in California is just 20.1 percent, which is nearly 31 percent lower than the national gun ownership rate of 29.1 percent.

Prevention Is Paramount

Critics might argue more women should own guns so they can defend themselves from domestic abusers: 56 percent of female gun owners believe owning a gun makes them safer.  But research suggests that when a woman owns a gun, her chances of dying increase as a result.

Researchers analyzed data from 1991 to 1996 and concluded the mortality rate for women who owned a gun was twice the rate of women who didn’t.  They also found female gun owners were fifteen times more likely to die of firearm suicide than women who didn’t own guns.  Researchers at Harvard University agree: more guns equal more homicides.

Critics might also argue California’s domestic violence prevention and gun laws had nothing to do with the decrease in the state’s domestic violence and gun violence rates; that these rates fell on their own because of a growing economy, higher employment, rising incarceration and the passage of the VAWA in 1994.  There might be something to this point since fewer people are experiencing domestic violence today.

But this post hoc ergo propter hoc style of criticism doesn’t negate the fact that domestic violence is 8.3 percent lower in California than the national average.  And it doesn’t negate the fact that California has a gun homicide rate 29.9 percent lower than the national average.  I bet it was the combination of comprehensive domestic violence prevention and gun laws passed during the 1990s and 2000s which made California so successful in decreasing both of these crimes.

Critics might further argue that there are approximately as many guns as there are people in the U.S. and the vast majority of Americans aren’t abused by gun owners each year, therefore domestic violence involving guns isn’t a problem.  But to these critics I’d argue that gun owners don’t have to actually abuse anyone for a victim to feel frightened or intimidated: one study found just knowing there is a firearm in the home is enough to make someone feel threatened.

Researchers measured the average gun ownership rate by state from 1981 to 2013 and found Wyoming had the highest rate at 73 percent.  They suggested that if Wyoming’s gun ownership rate decreased from 73 percent to 40 percent, there would be a 33 percent decrease in the murder rate among women.  As the authors wrote, “the rate of female non-stranger homicide in a state can be predicted well simply by using the prevalence of firearm ownership in that state.”

Experts often call domestic homicides the most predictable and preventable of all homicides because of the warning signs: 70 percent of the women who were killed by their partners were also abused by the same person before their deaths. “If we want to dramatically reduce the number of mass shootings, we could pay a lot more attention to domestic violence at an earlier stage,” said Kim Gandy, president of the National Network To End Domestic Violence.

Public Policy Recommendation: Domestic Violence Prevention

We’d be smart to continue developing, funding and studying domestic violence prevention programs.  Let’s foster programs that work, like the National Domestic Violence Hotline, and increase funding for them.  The federal government spends approximately $130 million per year on domestic violence prevention via the FVPSA; that figure should be increased until additional spending yields no further return on the investment.

Let’s also prevent people from becoming domestic abusers in the first place by teaching adolescents how to respect others and to treat people how they want to be treated when they become adults.  This means teaching young men how to communicate effectively and express their emotions without becoming aggressive.  We should “raise boys and men so they know it’s fine to cry and to show fear or other ‘weakness,’ and that expressing anger is not the only acceptable emotion for males,” said Nancy Lemon, a law lecturer at the University of California, Berkeley.  According to Lemon, those most likely to become domestic abusers as adults are the ones who were victims of or witnesses to domestic violence as kids.

Furthermore, we need to make the penalties for domestic violence more consistent and firm.  Athletes, celebrities and musicians don’t seem to be held to the same standard as everyone else when it comes to domestic violence; take Ray Rice, Tom Sizemore and Chris Brown as examples.  I think incarceration, fines, probation, restraining orders, community service and counseling, applied without exceptions, would be effective in curbing domestic violence.

We also need to change how courts handle domestic violence cases.  Today, one judge may hear the divorce while another hears the criminal domestic violence case, so victims often have to adapt to multiple judges, each with a different process and courtroom.  And divorce judges might not hear information about domestic violence, and vice versa, which only complicates matters.  We’d be smart to have one judge handle the criminal domestic violence hearing, divorce, child custody and related matters.

Law makers, remember, women vote more than men do.  Therefore, it’s in your interest to help low income women become economically independent so they can leave an abusive relationship if they’re in one.

  • Congress passed the Equal Pay Act in 1963 but “At the median, women’s hourly wages are only 83 percent of men’s hourly wages.”  The income gap is worse for women who are also minorities, “Black men’s average hourly wages went from being 22.2 percent lower than those of white men in 1979 to being 31 percent lower by 2015.  For women, the wage gap went from 6 percent in 1979 to 19 percent in 2015.”
  • Increasing the federal minimum wage to $12 per hour by 2020 “would boost wages for one-fourth of the workforce, or 35 million working people—56 percent of whom are women.”  Increasing and eventually eliminating the subminimum wage for tipped workers, currently just $2.13 per hour would boost wages and stabilize incomes for millions of service workers.  “Two-thirds of tipped workers are women, yet they still make less than their male counterparts.  At the median, women tipped workers make $10.07 per hour, while men make $10.63, including tips.”
  • Providing families with paid family medical leave will enable them to take paid time off for the arrival of a child, serious health condition affecting themselves or a relative, without forcing them to choose between work and family.  “Only 12 percent of private-sector employees have access to paid family leave.”
  • Increasing the salary minimum to $50,440 per year “would directly benefit 13.5 million salaried workers—the majority of whom are women—by guaranteeing them the right to receive time-and-a-half pay for work beyond 40 hours each week that is now provided at no cost to the employer.”
  • Providing public healthcare, daycare, affordable public university tuition and housing subsidies would help women become economically independent too.

Public Policy Recommendation: Gun Violence Prevention

The absence of information about gun ownership precludes law enforcement officers and public policy makers from curbing domestic violence and gun violence.  Police officers don’t know who owns a firearm: the Firearm Owners Protection Act prohibited the federal government from establishing a national database to record gun ownership information in 1986.  And current information about who owns guns is inadequate: not every state has a gun registry or gun sale database like California, and some of those databases are incomplete.  Congress would be smart to repeal the FOPA and pass a national law requiring each state to develop gun ownership and sales databases.

The Supreme Court upheld the federal gun ban for convicted domestic abusers, including those convicted of reckless domestic assault, in the Voisine case in 2016.  Congress would be smart to build on this decision and pass a national policy to bar stalkers and people subject to restraining orders from owning guns as well.  They could go further and require all records of prohibited abusers, stalkers and people subject to restraining orders be provided to the FBI’s NICS system.

State law makers should also require a background check for all gun sales.  Sure, the Brady Law required gun dealers to check the background of each buyer.  But if the buyer’s background check didn’t clear within 3 business days, the sale could still be completed.  Making the 3 day period indefinite, so the sale can’t proceed until the buyer’s background check clears, would help keep guns out of the hands of buyers who also happen to be domestic abusers.

State legislators should also pass permit-to-purchase laws covering all gun sales to ensure the buyer’s background is checked even in private sales.  When Connecticut law makers passed a permit-to-purchase gun law, gun homicides decreased by 40 percent from 1996 to 2005.  Missouri saw the opposite effect: when its permit-to-purchase gun law was repealed in 2007, gun homicides increased by 23 percent.

Finally, state legislators and the national Congress would be smart to expand funding for gun control research at the CDC.  Congress stripped $2.6 million of funding from the CDC in 1996 and “then passed a measure drafted by then-Rep. Jay Dickey, R-Ark., forbidding the CDC to spend funds ‘to advocate or promote gun control.’”  Repealing this restriction and funding the study of gun control is critical to reducing domestic violence and gun violence.  States should fund public universities to study gun control: California law makers appropriated $5 million to the University of California to study the subject in 2016.

Conclusion

California legislators took an activist public policy approach to preventing domestic violence and gun violence in the 1990s and 2000s.  Their policies were effective in decreasing both crimes.  If similar domestic violence and gun violence prevention laws were implemented at the national and state levels, they would likely be successful given California’s size and diverse population.  This of course would require Congress and the state legislatures to adopt the same activist role in public policy making to prevent more domestic violence and gun violence in the future.  The sooner legislators do this, the better.