A recent article on inverse.com says that autonomous cars have the potential to end car wrecks as we know them, which could save 300,000 lives per decade.
One of the leaders in this movement is Elon Musk, of whom I have been a fan for several years now. As the head of Tesla, SpaceX, SolarCity, and numerous other projects, Musk and his teams approach big problems with original thinking and audacious goals.
In my own work as a prevention consultant, I have been inspired by Musk to break down the problem of sexual harassment and assault and consider alternative ways to approach preventing it.
We have light bulbs, smartphones, the ability to go to the moon, and now self-driving cars; we should also be able to figure out how to get sane people not to violate others in ways that cause life-altering emotional trauma. We should be able to virtually eliminate the behavior that makes our schools, campuses, military, and places of work disturbing places to be for many women and girls (and some men and boys as well).
After learning many years ago of the experience of someone close to me, the realities of this issue weighed heavily on me, and that weight never subsided.
Eventually, I realized I could do something about it. I have been studying the issue of sexual violence and misconduct for fifteen years, and working exclusively on the topic for six as a speaker and curriculum developer.
Breakthroughs & Leaps
Countless women and a number of men have made the world a better place by bringing much greater attention to the seriousness of sexual harassment in the workplace and sexual misconduct of all types. The problem, however, is that despite the awareness, these issues persist.
I believe we now must make a dramatic leap in how we prevent it. I don’t want to just make incremental, gradual progress on this issue. I want us to solve it.
Behavior that causes life-altering emotional trauma must be met with a proportional response, not just after serious harm occurs but also with the implementation of systems that effectively prevent it.
The self-driving car is an example of a dramatic leap. And it occurred to me recently that there are many parallels between designing self-driving cars and the way we at Prevention Culture approach preventing sexual misconduct.
How Do Self-Driving Cars Work?
Essentially, the car “sees.”
What does the car see? The car recognizes objects within a relevant range, categorizes those objects (truck, parking meter, pedestrian, etc.), and makes choices accordingly.
According to an article at makeuseof.com, the system on most self-driving vehicles includes the following:
- Radar Sensors
- LIDAR (Light Detection and Ranging), except in Teslas, which use a camera-based computer vision system and aren't intended to be driven completely hands-free at all times.
- High-Powered Cameras
- Sophisticated Software
These technological tools work in a combined fashion to make the self-driving car work, and “work” essentially means driving to a destination without running into things.
They work by having a system that supports one goal. It is not just one thing that makes it work; many things are working together to support one prioritized goal.
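That idea of many inputs serving one prioritized goal can be sketched in a few lines of code. This is a purely illustrative toy, not real autonomous-vehicle software; the sensor names, the `Detection` type, and the safe-distance threshold are all my own simplifications for the sake of the analogy.

```python
# Illustrative sketch only: several sensor inputs are fused into one
# picture of the world, and every candidate action is filtered through
# one prioritized goal (avoid causing harm) before anything else.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "pedestrian", "truck", "parking meter"
    distance_m: float  # distance from the vehicle in meters

def fuse_sensors(radar, lidar, camera):
    """Combine detections from all sensors into one shared world model."""
    return radar + lidar + camera

def choose_action(detections, safe_distance_m=10.0):
    """Priority one: do no harm. Only then pursue the destination."""
    for obj in detections:
        if obj.distance_m < safe_distance_m:
            return "brake"   # the hard constraint always wins
    return "proceed"         # secondary goals run only when it is safe

world = fuse_sensors(
    radar=[Detection("truck", 40.0)],
    lidar=[Detection("pedestrian", 6.5)],
    camera=[Detection("parking meter", 25.0)],
)
print(choose_action(world))  # → "brake": a pedestrian is within range
```

The point of the sketch is the ordering: no individual sensor "decides" anything, and no secondary objective (speed, efficiency) is even consulted until the single non-negotiable goal is satisfied.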
Why They Work
How self-driving cars work is a matter of technology, but why they work has to do with a kind of infallible morality. To function properly, the car's braking and steering need to operate as if it were a "moral" vehicle, one that views the world around it in a certain way and navigates its route while avoiding causing harm to others.
You don’t have to program the car to do everything. You only need to focus on one thing—it essentially needs to not run into anything.
It is this narrow focus that helps with the design of self-driving cars, and it is a key principle of designing optimally effective systems of prevention.
Comparing Self-Driving Cars with Self-Navigating Humans
As I spell out how a self-driving car has a type of morality, you might notice there is a difference between this moral programming and how self-navigating humans sometimes operate in relation to other people.
Self-driving cars by scientific design have a perfect moral compass. People don’t. And it’s in the ways that humans are more likely to fail to act morally and ethically that we find points of leverage for influencing behavior and preventing harm.
(It may be helpful to remember that there is a broad spectrum of behavior that falls under sexual misconduct, ranging from unethical and violating behaviors that involve physical contact to those that do not, such as verbal sexual harassment, coercion, and unethical uses of technology, to name just a few.)
Self-Driving Cars: Priority One – Do No Harm
A self-driving car is programmed to have an absolute concern for not causing harm to any person, with no regard for that person’s background, appearance, reputation, previous behavior, gender, sexuality, heritage, or any other aspect of that human being.
Humans: Social psychology is just one discipline that can explain why otherwise decent humans can fail to act morally. Albert Bandura is perhaps the greatest social psychologist, living or dead, and his Moral Disengagement Theory helps explain that although virtually all people agree on the same virtues and values in the abstract, in real-world situations there are multiple ways they can fail to "self-regulate" and manage themselves ethically, because they fail to make the connection between their actions and the harm those actions could cause.
People are great at minimizing or entirely disregarding the harm done, whether it’s their own behavior or the behavior of “one of their own,” or a person with whom they can identify.
Self-Driving Cars: No Level of Harm is Acceptable
A self-driving car has a standard that zero harm is acceptable, regardless of circumstance. And this standard is ensured by maintaining the same level of respect for human life for all people.
Humans: People make themselves feel better about some behaviors by comparing them with something much worse. Bandura calls this “advantageous comparison,” and it can be done consciously or without even realizing it.
Multiple types of sexually violating behaviors do not involve obvious and overt physical violence (though the emotional harm can be the same or even greater), allowing for a potential disconnect and rationalization. This is how an otherwise reasonable and decent person can fail to perceive their verbal coercion, emotional manipulation, or unethical use of technology as wrong and damaging.
Self-Driving Cars: Responsibility is Absolute
A self-driving car operates with an unconditional acceptance of responsibility for the effects of its choices, regardless of the complexity of the situation.
Humans: People can be influenced by their perception of the other person(s) involved, their peer group, and any number of other things that result in them feeling less than completely responsible for their actions and their effects on another person.
For example, an athlete or fraternity member who is leading a hazing activity can tell himself that the almighty “tradition” is responsible, or that the people older than him are responsible for endorsing, condoning, or insisting on the activity. He also might tell himself it’s not only not a bad thing, but a very good thing, which Bandura calls “moral justification,” because he perceives it as developing team unity and a way to build stronger men. (If that is a person’s perception, why wouldn’t they feel it’s okay or even a great idea to continue that tradition?)
If a person approaches a physically intimate situation operating on the belief that it's the other person's responsibility to convince him that a certain act is not wanted (a belief conveniently based on his own judgment of what counts as convincing), he might rationalize aggressive behavior as normal and feel fine about actions that could cause real emotional harm to the other person.
Self-Driving Cars: Each Human Life is Equally Sacred, Regardless of Circumstance
A self-driving car is determining where human life is in its environment, not who a person is or what kind of person they are. It respects and values each human life implicitly so it can adjust and regulate its position to make sure it causes no type or level of harm.
A self-driving car does not rank the importance of people based on biography, rumors, surface-level appearance, or other factors.
Humans: As Bandura points out, people can devalue or dehumanize others. This can result in an otherwise “regular” person feeling less inhibited or less careful (or less caring about the other’s feelings and boundaries) with a person they perceive, consciously or not, as inferior or less deserving of respect. Jonathan Haidt’s moral foundations theory explains how some people can feel less empathy or concern for those perceived to be outside of their group or their kind of person.
One reason it is wrong for parents or educators to appear to condone labeling a girl or woman as a “slut” is because it feeds this view of devaluing and dehumanizing a person.
The above summary of ways people fail morally is just a brief set of examples that explain why strategic education is essential to reduce and prevent harmful behavior.
(Re)Programming Self-Navigating Humans
To design a self-driving car, you'd need to figure out how to effectively "influence the behavior" of the vehicle. And we are all tasked with effectively influencing the behavior of self-navigating humans when it comes to inappropriate and harmful behaviors.
These human tendencies of moral disconnect are serious weaknesses in the system; they are the weakest links that cause the moral chain to break. When I realized this, however, I became quite optimistic, because it provides us with a powerful insight: it shows us where the weakest links are, and it serves as a type of diagnosis for pinpointing where breakdowns are likely to occur.
This ability to find where the weakest links are allows us to make a leap in designing more effective prevention education.
These weakest links are also critical for understanding why common approaches of reiterating policies and consequences can fail to prevent harm. If they don't connect back to the points of disconnect, then a person can imagine himself to be a good guy (who doesn't?), act like a decent person most of the time, and then fail to navigate his social environment without causing harm to another person.
A Fleet is Coming to You
There is a new fleet of self-driving "vehicles" arriving at every school, campus, organization, and workplace each year. It matters what their programming is, and it matters that we have systems that can "re-program" them to make for a safer and better culture.
How do you influence the behavior of people? Is it by writing policies and then explaining those policies? Is it by having painful consequences for violating those policies? Is it by telling young men to be gentlemen and to treat others with respect, or by telling everyone to treat everyone with respect?
Each of these has a place—each can be a piece of a larger system—but history has proven that none of them are sufficient.
And Bandura’s work is just one theory from social science that explains why. Jonathan Haidt’s work on moral foundations also explains how otherwise decent people can have different priorities and unconscious biases that can lead to immoral behavior or a lack of concern when they are needed to help. George Lakoff’s cognitive linguistics work on moral worldviews explains how certain messaging and facts will fail to connect with those we most need to reach.
Each of these frameworks supports the notion that education is essential if we expect meaningful change, but also that the status quo approach of explaining statistics, policies on consent, and consequences for violating policies is not the most effective way to reach those we need to reach.
It takes a system of strategically designed educational approaches to make our communities as safe as they need to be.
The human mind and heart are too complex for prevention to be as easy as explaining policies on consent and reiterating the rules. There are too many ways people disconnect their actions from the harm they can cause, often without realizing it.
The Good News
One piece of good news is that it can be easier and more effective to address both preventing unethical, unhealthy behavior and equipping people for ethical, positive behavior at the same time.
Chew on that for a minute. We can accomplish multiple goals at once, in addition to the most critical goal of preventing very serious harm. For example, The Complete Strength approach for athletes and athletics departments is designed to prepare them for positive personal lives while also preventing sexual misconduct and other types of mistreatment.
We are not just trying to prevent felonies; we want to prevent all types of interpersonal harm and equip people for great personal lives and relationships of all kinds.
In future writings, I will lay out what a system of preventing sexual misconduct and other interpersonal harm should look like.
Because for prevention education and messaging to be effective, it needs to accomplish two things:
- It must gain access so it can reach and connect with each person, reaching men as well or better than it reaches women.
- It must influence behavior.
We must influence behavior, because it’s behavior that either helps or harms. It is behavior that is either ethical or unethical, healthy or unhealthy.
Education on the issue is relevant to the extent that it can influence behavior in the ways that matter most. And we cannot influence behavior to the extent that we need to if our message is not well-received by those who need to hear it.
What if a self-driving car would not let you work on it?
This imperative to reach and connect presents challenges, but it also helps guide our strategies.
It requires that we craft messaging carefully and approach prevention in a certain way to optimize our impact.
The good news is there are things we can do to build safer social cultures that also equip young people for more positive personal lives.
But until we do…
The reality is that until we implement more effective systems, there is little reason to expect humans to operate differently than they have in the past.
People understand why a self-driving car needs to be carefully programmed and consistently maintained; the same is true of humans as well.
Author information: Aaron Boe, M.S.Ed. is the founder of Prevention Culture http://www.preventionculture.com and the creator of The Complete Strength System for student-athletes and pro athletes http://www.completestrength.org. He can be reached at firstname.lastname@example.org with questions or for more information on consulting, curriculum, or speaking engagements. You may also contact Prevention Culture to continue the conversation on strategic approaches to preventing harm.
Special thanks to our team member Abbe Erle for her help as an editor on this post.
Bandura, Albert. Moral Disengagement. New York: Worth Publishers, 2016. Print.
Haidt, Jonathan. The Righteous Mind. New York: Vintage Books, 2012. Print.
Lakoff, George. Don’t Think of an Elephant!. White River Junction: Chelsea Green, 2014. Print.