

As the government begins its crackdown on essay mill websites, it’s easy to see just how much pressure students are under to get top grades for their coursework these days. But writing a high-scoring paper doesn’t need to be complicated. We spoke to experts to get some simple techniques that will raise your writing game.

Tim Squirrell is a PhD student at the University of Edinburgh, and is teaching for the first time this year. When he was asked to deliver sessions on the art of essay-writing, he decided to publish a comprehensive (and brilliant) blog on the topic, offering wisdom gleaned from turning out two or three essays a week for his own undergraduate degree.

“There is a knack to it,” he says. “It took me until my second or third year at Cambridge to work it out. No one tells you how to put together an argument and push yourself from a 60 to a 70, but once you get to grips with how you’re meant to construct them, it’s simple.”


Poke holes

The goal of writing any essay is to show that you can think critically about the material at hand (whatever it may be). This means going beyond regurgitating what you’ve read; if you’re just repeating other people’s arguments, you’re never going to trouble the upper end of the marking scale.

“You need to be using your higher cognitive abilities,” says Bryan Greetham, author of the bestselling How to Write Better Essays. “You’re not just showing understanding and recall, but analysing and synthesising ideas from different sources, then critically evaluating them. That’s where the marks lie.”

But what does critical evaluation actually look like? According to Squirrell, it’s simple: you need to “poke holes” in the texts you’re exploring and work out the ways in which “the authors aren’t perfect”.

“That can be an intimidating idea,” he says. “You’re reading something that someone has probably spent their career studying, so how can you, as an undergraduate, critique it?

“The answer is that you’re not going to discover some gaping flaw in Foucault’s History of Sexuality Volume 3, but you are going to be able to say: ‘There are issues with these certain accounts, here is how you might resolve those’. That’s the difference between a 60-something essay and a 70-something essay.”

Critique your own arguments

Once you’ve cast a critical eye over the texts, you should turn it back on your own arguments. This may feel like going against the grain of what you’ve learned about writing academic essays, but it’s the key to drawing out developed points.

“We’re taught at an early age to present both sides of the argument,” Squirrell continues. “Then you get to university and you’re told to present one side of the argument and sustain it throughout the piece. But that’s not quite it: you need to figure out what the strongest objections to your own argument would be. Write them and try to respond to them, so you become aware of flaws in your reasoning. Every argument has its limits and if you can try and explore those, the markers will often reward that.”


Fine, use Wikipedia then

The use of Wikipedia for research is a controversial topic among academics, with many advising their students to stay away from the site altogether.

“I genuinely disagree,” says Squirrell. “Those on the other side say that you can’t know who has written it, what they had in mind, what their biases are. But if you’re just trying to get a handle on a subject, or you want to find a scattering of secondary sources, it can be quite useful. I would only recommend it as either a primer or a last resort, but it does have its place.”

Focus your reading

Reading lists can be a hindrance as well as a help. They should be your first port of call for guidance, but they aren’t to-do lists. A book may be listed, but that doesn’t mean you need to absorb the whole thing.

Squirrell advises reading the introduction and conclusion and a relevant chapter but no more. “Otherwise you won’t actually get anything out of it because you’re trying to plough your way through a 300-page monograph,” he says.

You also need to store the information you’re gathering in a helpful, systematic way. Bryan Greetham recommends a digital update of his old-school “project box” approach.

“I have a box to catch all of those small things – a figure, a quotation, something interesting someone says – I’ll write them down and put them in the box so I don’t lose them. Then when I come to write, I have all of my material.”

There are plenty of online offerings to help with this, such as the project management app Scrivener and the referencing tool Zotero. And for the procrastinators, there are productivity programmes like SelfControl, which allow users to block certain websites from their computers for a set period.


Look beyond the reading list

“This is comparatively easy to do,” says Squirrell. “Look at the citations used in the text, put them in Google Scholar, read the abstracts and decide whether they’re worth reading. Then you can look on Google Scholar at other papers that have cited the work you’re writing about – some of those will be useful. But quality matters more than quantity.”

And finally, the introduction

The old trick of dealing with your introduction last is common knowledge, but it seems few have really mastered the art of writing an effective opener.

“Introductions are the easiest things in the world to get right and nobody does it properly,” Squirrell says. “It should be ‘Here is the argument I am going to make, I am going to substantiate this with three or four strands of argumentation, drawing upon these theorists, who say these things, and I will conclude with some thoughts on this area and how it might clarify our understanding of this phenomenon.’ You should be able to encapsulate it in 100 words or so. That’s literally it.”


April 30 marks the deadline for a contest challenging software developers to create an automated scorer of student essays, otherwise known as a roboreader, that performs as well as an expert human grader. In January, the Hewlett Foundation of Hewlett-Packard fame introduced the Automated Student Assessment Prize (ASAP…get it?), offering up $100,000 in awards to “data scientists and machine learning specialists” to develop the application. In sponsoring this contest, the Foundation has two goals in mind: to improve the standardized testing industry and to advance technology in public education.

The contest is only the first of three, with the others aimed at developing automated graders for short answers and for charts and graphs. But the first challenge for the nearly 150 participating teams is to prove their software has the spell-checking capabilities of Google, the insights of Grammar Girl, and the English-language chops of Strunk’s Elements of Style. Yet the stakes for developing automated essay scoring software are much higher than the relatively paltry $60,000 first-place prize reflects.

Developers of reliable roboreaders will not just rake in massive loads of cash thrown at them by standardized testing companies, educational publishers, and school districts, but they’ll potentially change the way writing is taught forever.

It’s hard to comprehend exactly how transformative the Bush Administration’s No Child Left Behind reform effort has been for U.S. education, but one consequence of the law is that it fueled reliance on standardized testing to gauge student performance. In 2001, the size of the industry was somewhere in the $400-700 million range, but today, K-12 testing is estimated to be a $2.7 billion industry. Yet even though school districts risk losing funding if they perform poorly on tests, the technology used to grade the exams is fundamentally the same as it has been for over 50 years. That’s right: optical mark recognition, better known as Scantron machines, which work incredibly well for true/false and multiple choice questions.

But many tests include some assessment of student writing, and for those sections, testing companies need human graders. A recent exposé of this industry reveals that tens of thousands of scorers are employed temporarily in the spring to grade these test sections under high stress in “essay-scoring sweatshops.” It’s sobering to think that the measure of a school’s performance rests on the judgment of a solitary temp worker earning between $11 and $13 an hour, plowing through stacks of exams to assess responses to the prompt “What’s your goal in life?”

That’s not to say that these temps aren’t qualified or don’t work hard, or even that their jobs are expendable. But when poor marks for essay writing can disrupt the lives of hundreds or even thousands of children, as well as those of their parents, teachers, and administrators, the development of a roboreader that can grade like a human being is clearly long overdue.

Although the demand for automated scoring software is about bringing greater reliability to standardized testing, the same technology could quickly become a teacher’s best friend. Ask educators what the worst part of their job is, and they’ll likely say “grading,” especially if they use short- and long-answer questions or they teach English and must read student essays. Because of this, it’s common for teachers to rely on assessments that lend themselves to easier scoring, such as true/false, multiple choice, fill-in-the-blank, and matching questions. Educators can’t really be faulted for this, especially as class sizes have grown along with the demand for more frequent assessment.

Yet students need to improve their writing skills now more than ever. Studies from the National Center for Education Statistics (NCES) conducted since 1998 consistently show that 4 out of 5 12th grade students in the US are not proficient writers, and approximately 1 in 5 are not even basic writers.

Unfortunately, digital technology has opened everyone’s writing skills to public evaluation and critique. Whether it’s emails, tweets, status updates, comments, threads, eBay descriptions, blog posts, resumes, product reviews, or articles, we are increasingly writing and being read. In the information age, writing is our primary means of expression and communication in personal, professional, and public forums. As mobile use increases and our lives become more digital by the day, the need for strong writing grows. And for nearly everyone, the foundation for this vital skill set is built from years of writing assignments during the first two decades of life.

Still, there has been resistance to roboreaders, primarily rooted in the fear of two outcomes: that they’ll make teachers either lazy or obsolete. One only needs to recall an easily overlooked technology to see why these fears are unfounded: spell checkers. Around since before PCs even existed (the first was developed in 1966 at MIT), spell checkers quickly became a staple of word processors in the 1980s. They’re so standard in programs now that the only critique anyone really has is when they aren’t 100 percent accurate (as has been captured for your viewing pleasure by Damn You, Autocorrect). Spell checkers have been criticized for making students poorer spellers, but perhaps that’s because the technology hasn’t been used directly to help people improve their spelling. Or maybe, like so many other things, correct spelling is something humans can relegate to computers without really losing anything.

Spelling, grammar, sentence and paragraph structure, and even style are the machinery of language, following hierarchies of rules that translate well into logic-based computer code. Back in 2007, Peter Norvig of Google fame demonstrated how a spell checker could be written in about 20 lines of Python code using probability theory. Consider, too, how Wolfram Alpha has automated the understanding of language, fueling voice-responsive apps like Siri into common use, with similar apps like Evi on the horizon. By combining such approaches with the proper algorithms, assessing the quality of someone’s writing is ripe for automation. Inherently, roboreaders are no different from any other form of automation. They are effectively taking over the manual labor of language analysis, which will free English teachers to focus on what they are passionate about anyway: creativity, ideas, and patterns of thinking, all of which may be what differentiates humans from the machines of the future.
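To give a flavor of the probability-based approach Norvig described, here is a condensed sketch of a spell corrector in that spirit: it builds a unigram word-frequency model from a corpus, generates all candidate strings within one or two edits of a misspelled word, and picks the candidate the model considers most probable. The tiny inline corpus is purely illustrative (Norvig trained on roughly a megabyte of real text), and this is a sketch of the idea, not his exact code.

```python
import re
from collections import Counter

# A tiny illustrative corpus; a real corrector would train on a large text file.
CORPUS = """
the quick brown fox jumps over the lazy dog the dog barks
spelling is hard and spelling errors are common in the wild
"""

# Unigram frequency model: how often each word appears in the corpus.
WORDS = Counter(re.findall(r"[a-z]+", CORPUS.lower()))

def P(word, N=sum(WORDS.values())):
    """Probability of `word` under the unigram model."""
    return WORDS[word] / N

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def edits2(word):
    """All strings two edits away."""
    return {e2 for e1 in edits1(word) for e2 in edits1(e1)}

def known(words):
    """The subset of `words` that actually appear in the corpus."""
    return {w for w in words if w in WORDS}

def candidates(word):
    # Prefer the word itself, then known words one edit away, then two.
    return known([word]) or known(edits1(word)) or known(edits2(word)) or [word]

def correct(word):
    """Most probable spelling correction for `word`."""
    return max(candidates(word), key=P)

print(correct("speling"))  # -> spelling
```

The key design choice is treating correction as choosing the candidate c that maximizes P(c), with edit distance serving as a crude stand-in for an error model: closer edits are tried first, and word frequency breaks ties among them.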

Education desperately needs tools to help improve student writing. Given the present economy and school budgets across the U.S., it is clear that the solution must be efficient, cost effective, widely available, versatile, and easy to use. And roboreaders look like the best bet. If the Hewlett Foundation’s contest can lead to roboreaders that can perform better than temp workers, they can be modified for adoption by educators to help in grading, which will ultimately help students become better writers.

And just maybe, the same technology will be accessible to anyone who writes extensively in the digital space, helping everyone to improve their writing. For the sake of the English language on the web, here’s hoping the contest achieves its goals.

[Media: Eduify]

[Sources: Citypages, Kaggle, PBS, Reuters]

I've been writing for Singularity Hub since 2011 and have been Editor-in-Chief since 2014. My interests cover digital education, publishing, and media, but I'll always be a chemist at heart.

