Adjusting Your Approach on the Sept/Oct PF Topic | Champion Briefs

October 6, 2016


By Alekh Kale

With a few tournaments out of the way, I suspect that most of those reading this post have a good idea of what probable cause is, know the difference between reasonable suspicion and probable cause, and know what SRO stands for. If that does not describe you, I highly suggest you get your hands on our briefs for this month, read the multiple topic analyses, and look through the wealth of evidence our writers have compiled for “Septober.” If you are not familiar with the topic yet, this post is not for you. I don’t intend to give a broad overview of the topic, but rather my opinions on the arguments people are running, what I think does and doesn’t work in front of all types of judges, and how you can take your argumentation to the next level for the coming month. As a disclaimer, I think any offensive argument can be run well and win rounds; below are just my opinions on which ones are better than others.

To start, I think a problem a lot of teams are experiencing is going for impact scenarios that are incredibly limited in magnitude and scope. For instance, while I do think that racial discrimination occurs under reasonable suspicion, very few teams do a good job of explaining the degree to which probable cause would end biased searching. Those that do are left with impacts that are hard to weigh, considering that minority students face a whole host of issues with the school-to-prison pipeline beyond just being searched. Another argument that falls victim to this is anonymous tips. By the time teams use their limited number of words in case to describe why probable cause wouldn’t allow for anonymous tips (a link that I’m sure many of you have found answers to), the impact that most teams end up finishing with describes anonymous tips being successful in an isolated portion of the United States.

Unfortunately, many teams go to the other extreme and make outlandish claims about what searching a student does to them. The idea that being searched can make someone more likely to become a delinquent through a self-fulfilling prophecy is one that requires a lot of psychological research to prove causally. And even if some authors have done the heavy lifting and designed amazing experiments proving it to be true, for most judges, lay and flow alike, it’s crucial to explain why an argument is true beyond having a source that backs it up. Many teams are going the typical route of using numbers in lieu of more compelling argumentation. “One point on the teacher fairness scale,” for instance, does not sound very persuasive to a judge who is inexperienced in debate, and for a judge who is experienced in debate, it is probably harder to evaluate unless a team does the work of explaining what a one-point increase in fairness actually means. Again, most teams aren’t doing a great job of proving causality when, for instance, issues at home or bullying at school could be the reason students both distrust their teachers and are more likely to act out, or when a school dress code policy or bad grading could trigger the same harm.

The numbers issue applies to a whole host of other arguments, from SROs to the school-to-prison pipeline to trust, you name it. But on a topic so limited in direct literature, these numbers probably aren’t going to be what wins or loses you rounds. In judging, I’ve seen rounds where both teams run the same impact cards word for word, and ultimately I’m left deciding the round on strength of link and strength of narrative rather than on which team decides to extend the Johnson or James evidence. Interestingly enough, many teams are making two errors at once: first choosing a limited link story like discriminatory searching, and then choosing impacts so vague that they’re hard to weigh at the end of the round.

For many judges, lay and flow alike, this topic is esoteric, at least in their first few rounds of judging. I’ve talked to teams that have been dropped because a judge thought probable cause was a lower evidentiary standard than reasonable suspicion, and to teams that have found it necessary to ask about a judge’s experience with the topic before the round starts in order to make adjustments such as internally defining terms. Very few people outside the realms of education and justice are well versed in school searches and school discipline. In front of a college student who’s judging a tournament just to help his old team, or someone who hasn’t had any experience with the community, simply throwing around terms without explanation, or creating narrow link chains that lead to vague impacts, is going to cost teams rounds.

The solution is getting back to the fundamentals of the resolution: safety and privacy. Many teams have already discovered the strategy of arguing in case that schools will treat everything as unsafe under probable cause and turn to harsh reactionary policies, while in rebuttal running reasons why searches don’t actually decrease when negating the resolution. This has proven to be an incredibly effective strategy because the story stays consistent from case to rebuttal; teams build up a strong narrative as they go and consequently explain their arguments better. Strong affirmative teams will find their cases focusing heavily on privacy violations and the impacts those might have on the student psyche or even the world outside of school. The idea that students are socially conditioned to accept these violations as adults, or come to resent authority, can be incredibly persuasive to any judge if the right rhetoric is used.

The point is, going bigger picture with the resolution will do teams some good. Using strong rhetoric instead of vague numbers, and humanizing impacts to explain in a compelling way exactly what will happen in an affirmative or negative world, will serve teams much better than stat spinning, especially when the odds are that your opponents are running the exact same numbers you cut against you. Focusing on strength of link from case onward, instead of throwing out a bunch of impacts, is what I think is winning teams rounds.

Going into October, I think the teams that end up winning large tournaments such as Bronx are going to be the ones that go in depth on their links and use explanation and strong rhetoric to show how those specific links trigger whatever harms or advantages they decide to run.

With that being said, I think teams are also approaching comparison in the last two speeches incorrectly. A lot of teams try to outweigh their opponents’ impacts even when both sides link into exactly the same thing. Saying “racism outweighs everything else for these 7 reasons” isn’t useful when your opponents are running securitization on the neg with exactly the same impact cards you already read. Instead, this time would be better spent tackling why your link is more true than your opponents’: not just why what they are saying is false, but why what you have already presented in case is more factually correct. Taking the time you would have used to put one more answer on an argument, and instead using it to reference an argument that links into the same impact and explain why yours is more true, will do some good. For instance, if an affirmative team is running an argument about probable cause increasing trust, you’re running securitization on the negative side, and you both agree that institutional trust is a good thing, a sentence such as “while the odds of a student getting searched on a given day might be lowered, most students probably aren’t going to be searched every day anyway, but metal detector gates affect everyone every day,” or conversely “if everyone walks through a metal detector, it won’t decrease trust because no one is being singled out, but individual searches are always going to change a student’s opinion of authority,” can do a good job of clarifying early on who is winning the impact. This can be done in summary as well. It’s still weighing, just not at the impact level.

Ultimately, if you take nothing else away from this lengthy blog post, let it be this: compare links early on, and explain and humanize the impacts you run in case. Best of luck in October.