Category: tech contrarianism

Why Learning Analytics Aren’t Like Netflix Recommendations

Bill Jerome, in an excellent post aimed at people who perceive an obvious connection between learning analytics and Netflix recommendations:

The more a user stays engaged with [Netflix and Amazon], the more profit they generate. The comparisons to those kinds of analytics pretty much end there. Unfortunately for those looking for the easy path, our outcomes are complex and the inputs aren’t actually that obvious either.

Then later:

Now what happens if we tell a student they aren’t achieving learning outcomes when in fact we are wrong about that? The potential for demotivating the student comes at a high cost. This could happen with errors in reporting the other way, as well. If learning analytics inform a student they are succeeding but in fact they are not prepared for their next exam or job, the disservice is just as bad. Getting learning analytics wrong on the learning dimension is a recipe for disaster; this work must be done carefully and with understanding.

As far as I’m concerned, between this post and Michael Feldstein’s earlier “A Taxonomy of Adaptive Analytics Strategies”, the e-Literate blog has cornered the market on nuance and insight in the learning analytics discussion.

BTW. Probably related: What We Can Learn About Learning From Khan Academy’s Source Code.

The Soaring Promise Of Big Data In Math Education

Stephanie Simon, reporting for Reuters on inBloom and SXSWedu:

Does Johnny have trouble converting decimals to fractions? The database will have recorded that – and may have recorded as well that he finds textbooks boring, adores animation and plays baseball after school. Personalized learning software can use that data to serve up a tailor-made math lesson, perhaps an animated game that uses baseball statistics to teach decimals.

Three observations:

One, it shouldn’t cost $100 million to figure out that Johnny thinks textbooks are boring.

Two, nowhere in this scenario do we find out why Johnny struggles to convert decimals to fractions. A qualified teacher could resolve that issue in a few minutes with a conversation, a few exercises, and a follow-up assessment. The computer, meanwhile, has a red x where the row labeled “Johnny” intersects the column labeled “Converting Decimals to Fractions.” It struggles to capture conceptual nuance.

Three, “adores” protests a little too much. “Adores” represents the hopes and dreams of the educational technology industry. The purveyors of math educational technology understand that Johnny hates their lecture videos, selected response questions, and behaviorist video games. They hope they can sprinkle some metadata across those experiences – i.e., Johnny likes baseball; Johnny adores animation – and transform them.

But our efforts at personalization in math education have led all of our students to the same buffet line. Every station features the same horrible gruel, but at the final station you can select your preferred seasoning for that gruel. Paprika, cumin, whatever, it’s yours. It may be the same gruel for Johnny afterwards, but Johnny adores paprika.
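To make observations two and three concrete, here is a minimal sketch in Python of what this style of “personalization” amounts to. Every name in it is hypothetical, mine rather than any vendor’s: a boolean mastery grid records that Johnny failed, an interest tag re-themes the exercise, and nothing anywhere records why he struggles.

```python
# A hypothetical sketch of interest-tagged "personalization."
# The mastery record is a boolean grid (a red x, nothing more),
# and the interest tag only swaps the theme of an identical exercise.

mastery = {
    ("Johnny", "converting decimals to fractions"): False,  # the red x
}

interests = {"Johnny": "baseball"}  # the sprinkle of metadata

themed_items = {
    "baseball": "A batter gets a hit 0.325 of the time. Write 0.325 as a fraction.",
    "default": "Write 0.325 as a fraction.",
}

def next_lesson(student, skill):
    """Serve the same gruel, seasoned with the student's interest tag."""
    if mastery.get((student, skill), True):
        return None  # the grid says "mastered"; nothing to serve
    theme = interests.get(student, "default")
    return themed_items.get(theme, themed_items["default"])

print(next_lesson("Johnny", "converting decimals to fractions"))
```

The data model has a slot for the seasoning and a slot for the red x, and no slot at all for the conversation a qualified teacher would have had.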

Featured Comment:

a different Dave:

Enjoyable games/activities in general are difficult to create, especially in any quantity. Learning and teaching are complicated and personal by necessity. The combination is exceptionally difficult. […] It’s just not realistic for this to happen on any timetable or method I’ve seen proposed.

2013 Mar 11. Michael Feldstein links up this post and wires in a comprehensive “Taxonomy of Adaptive Analytics Strategies.”

Precious moments:

First of all, the sort of surface-level analysis we can get from applying machine learning techniques to the current data we have from digital education systems is insufficient to do some of the most important diagnostic work that real human teachers do.

Then there are those systems where you just run machine learning algorithms against a large data set and see what pops up. This is where we see a lot of hocus pocus and promises of fabulous gains without a lot of concrete evidence. (I’m looking at you, Knewton.)

And guess what? Nobody’s been able to prove that any particular theory of learning styles is true. I think black box advocates latch onto video as an example because it’s easy to see which resources are videos. Since doing good learning analytics is hard, we often do easy learning analytics and pretend that they are good instead.

An Aggravating And Energizing Hypothetical

Andrew Leonard:

All we need is one superb remedial algebra course that can be effectively delivered online and, theoretically, the demand for a zillion remedial algebra courses taught at a zillion community colleges suddenly drops off a cliff.

This hypothetical drives me up the wall, oblivious as it is to all the very interesting things that can happen in a brick-and-mortar classroom that can’t yet happen on the Internet.

The Internet is like a round pipe. Lecture videos and machine-scored exercises are like round pegs. They pass easily from one end of the pipe to the other.

But there are square and triangular pegs: student-student and teacher-student relationships, arguments, open problems, performance tasks, projects, modeling, and rich assessments. These pegs, right now, do not flow through that round pipe well at all.

So I’m aggravated by the hypothetical and, especially, its seductive allure to money-men and policy-makers.

But it also energizes me. It makes our job rather clear, doesn’t it?

Promote the hell out of the square and triangular pegs.

Push them into the plain view of anybody who’d love to believe math education isn’t anything more than a set of round pegs ready for a trip down the round pipe.

[via]

Pattern Matching In Khan Academy

Stephanie H. Chang, one of Khan Academy’s software engineers:

I observed how some students made progress in exercises without necessarily demonstrating understanding of the underlying concepts. The practice of “pattern matching” is something that Ben Eater and Sal had mentioned on several occasions, but seeing some of it happening firsthand made a deeper impression on me.

The question of false positives looms large in any computer adaptive system. Can we trust that a student knows something when Khan Academy says the student knows that thing? (Pattern matching, after all, was one of Benny’s techniques for gaming Individually Prescribed Instruction, Khan Academy’s forerunner.)

It is encouraging that Khan Academy is aware of the issue, but machine-scorers remain susceptible to false positives in ways that skilled teachers are not. If we ask richer questions that require more than a selected response, teachers get better data, leading to better diagnoses. That’s not to say we shouldn’t put machines to work for us. We should. One premise of my work with Dave Major is that the machines should ask rich questions but not assess them, instead sending the responses quickly and neatly over to the teacher who can sequence, select, and assess them.
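Here is a minimal sketch of that division of labor, with every name hypothetical: the machine poses the rich question and collects free-form responses, and the only thing it computes is an ordering for the teacher, who does all of the assessing.

```python
# A hypothetical sketch: the machine asks a rich question, collects
# free-form answers, and routes them to the teacher without scoring.

from dataclasses import dataclass

@dataclass
class Response:
    student: str
    answer: str  # free-form: a sentence, an argument, a description of a sketch

class TeacherQueue:
    def __init__(self):
        self.responses = []

    def collect(self, response):
        self.responses.append(response)  # no score assigned, ever

    def sequenced(self):
        """One arbitrary ordering (shortest first); the teacher
        sequences, selects, and assesses from here."""
        return sorted(self.responses, key=lambda r: len(r.answer))

queue = TeacherQueue()
queue.collect(Response("Johnny", "0.5 > 0.42 because 5 > 42... wait."))
queue.collect(Response("Maria", "0.5 is 50/100 and 0.42 is 42/100, so 0.5 is bigger."))

for r in queue.sequenced():
    print(r.student, "->", r.answer)
```

Note what is missing: there is no scoring function anywhere in it.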

BTW. Also from Chang’s blog: a photo of Summit San Jose’s laptop lab, a lab which seems at least superficially similar to Rocketship’s Learning Lab. My understanding is that Summit’s laptop lab is staffed with credentialed teachers, not hourly-wage tutors as with Rocketship. Which is good, but I’m still uncomfortable with this kind of interaction between students and mathematics.

[via reader Kevin Hall]

Featured Comment

Stephanie H. Chang responds:

We think the work you’re doing with Dave Major is really exciting and inspiring. Open-ended questions and peer- or coach-graded assignments are incredibly powerful learning tools and my colleagues at KA don’t disagree. We definitely have plans to incorporate them in the future.

Mg:

My old school last year relied on a teaching model where the students had to try and teach themselves a lot of math by utilizing classroom resources. A lot of the practice was through Khan Academy or by students completing practice problems with accessible answer keys. Ultimately what happened was that the students only looked for patterns and had no conceptual understanding of the math at all. Even worse was that students who had “mastered” the concept were encouraged to teach the other students how to solve problems but they could only do so in the most superficial manner possible.

Bowen Kerins:

One way sites like Khan (and classroom teachers) can deal with this is by retesting – say, three months later, can a student solve the same problem they solved today? If not, they clearly only had a surface-level understanding or worse.

I’d like to see Khan or other sites force students to retest on topics that were marked as “completed”. But then again, I feel pretty much the same way about miniquiz-style Standards Based Grading.
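Here is a minimal sketch of the retest policy Kerins describes; the ninety-day interval and every name are hypothetical choices of mine, not Khan Academy’s actual scheduler. “Completed” becomes provisional until a delayed retest confirms it.

```python
# Hypothetical sketch of delayed retesting: "completed" is provisional
# until the student passes the same kind of problem months later.

from datetime import date, timedelta

RETEST_DELAY = timedelta(days=90)  # Kerins's "three months later"

class Topic:
    def __init__(self, name):
        self.name = name
        self.completed_on = None
        self.confirmed = False

    def complete(self, when):
        self.completed_on = when
        self.confirmed = False  # mastery is provisional, not final

    def due_for_retest(self, today):
        return (self.completed_on is not None
                and not self.confirmed
                and today - self.completed_on >= RETEST_DELAY)

topic = Topic("converting decimals to fractions")
topic.complete(date(2013, 1, 5))
print(topic.due_for_retest(date(2013, 4, 10)))  # True: time to retest
```

Noam notes below that Khan Academy does schedule reviews after mastery; the harder half of the problem is keeping the retest items themselves from being pattern-matched.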

jsb16:

Reminds me of the story about the tank-recognizing computer. I doubt we’ll have worthwhile computer scoring that isn’t susceptible to pattern-matching until we have genuine artificial intelligence.

And then the computers will want days off, just as teachers do.

Noam:

KA does force review of concepts after mastery is achieved, generally a few weeks after completion. Problem is, doesn’t take students long to do the pattern matching again.

We instituted a policy where students must make their own KA style videos explaining how to solve a set of problems that they struggled with. Best way we found to deal with the issue.

Zack Miller comments on the laptop lab at Summit, where he teaches math:

Our math model, described as concisely as possible: students spend two hours per day on math; one hour in breakout rooms and one hour in the big room (seen in your picture) where students are working independently. In the breakout rooms, students work on challenging tasks and projects (many of which we can thank you for) that develop the standards of math practice, often in groups and with varying amounts of teacher structure. Development of cognitive skills via frequent exposure to these types of tasks is paramount to our program. It is also in the breakout rooms where students’ independent work — which is mostly procedural practice — is framed and put in context. Students know that their work in the big room supports what they do in the seminar rooms and vice versa.

Is This Press Release From 2012 or 1972?

Here are five quotes, some of which are from edtech startups in 2012 while others are from an advertorial for “Individually Prescribed Instruction” published by ASCD in 1972. Can you tell them apart?

#1

Educators and parents across the country seem to agree that a system of individualized instruction is much needed in our schools today. This has been evident to any parent who has raised more than one child and to every teacher who has stood in front of a class.

#2

[This product] allows the teacher to monitor the child’s progress but more important it allows each child to monitor his own behavior in a particular subject.

#3

The objectives of the system are to permit student mastery of instructional content at individual learning rates and ensure active student involvement in the learning process.

#4

This is a step towards the superior classroom, because the system includes material that can be used independently, allowing each child to learn at his own rate and realize success.

#5

The technology, training program, and management technique give the teacher tools for assessment, mastery measurement, and specified management techniques.

Okay, they’re all from 1972, from a piece called “Do Schools Need IPI? Yes!” [pdf]. But really the only line that’s obviously out of the past is:

The aide’s most important function is the scoring, recording, and filing of students’ test and skill sheets.

Computers now handle that scoring, recording, and filing. But in every other way, you could have ripped the text of that article from a TechCrunch article or New Schools prospectus.

I’m not merely snarking that what we think is new and great isn’t so new. I’m also saying it still isn’t great. Stanley Erlwanger wrote an incredible piece in 1973 illustrating how easy it was for a student named Benny to appear successful in IPI while actually knowing very little. Both in 1972 and in 2012, these systems ask questions that are trivial enough to be gamed. The only difference is that instead of writing questions to accommodate the limitations of a human-scorer, we’re now writing questions to accommodate the limitations of a machine-scorer.
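To make “trivial enough to be gamed” concrete, here is a hypothetical illustration. It is not Benny’s actual rule, just the same species of shortcut: if the item bank only ever asks about tenths, a surface rule scores perfectly against an exact-match scorer while being conceptually wrong.

```python
# Hypothetical illustration of gaming a machine scorer. The item bank
# only asks about tenths, so a conceptually wrong surface rule passes.

def surface_rule(numerator, denominator):
    """The shortcut: put the numerator after '0.' and move on.
    Right for n/10 with n < 10, wrong everywhere else."""
    return "0." + str(numerator)

def machine_score(answer, key):
    return answer == key  # exact match; no diagnosis, no follow-up

item_bank = [(3, 10, "0.3"), (7, 10, "0.7"), (9, 10, "0.9")]

print(all(machine_score(surface_rule(n, d), key) for n, d, key in item_bank))
# True: "mastered," by a rule that also says 25/10 = 0.25
```

A teacher who asks Johnny to explain his rule catches this with one question. The exact-match scorer never can.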

If you’re in this industry, read those papers closely enough that you can tell yourself, “I understand why IPI failed. This is how we’re different.” Basically, IPI is a free failure for you and your company. I hope you won’t pass it up.

BTW. Justin Reich points me to the opposing piece from the same ASCD issue:

While some persons see the IPI program as aimed in the direction of “humanness and openness,” I consider its implementation a step in the opposite direction for many schools. For more than 50 years, many recognized leaders in education have worked to move learning opportunities provided in our schools from “rigid, passive, rote, and narrow” to “open and humane.”

2013 Jan 12. Mike Caulfield again points out that personalized learning may have an isolating effect on students who really need to have their assumptions tested by their peers:

Benny, the student the study is about, has some odd ideas about mathematics, induced by peculiarities of the testing system. But he’ll never know they are odd because the individualized instruction makes discussion with peers impossible.

2013 Jan 13. Mary had a positive experience with IPI and highlights the efforts her teacher took to keep the program from isolating students with their misconceptions:

I was educated using IPI from K-4. IPI allowed me to work at my own pace, which tended to be faster than average in math and about average in reading. When I moved to a district that did not use it, I was devastated. I hated the non-IPI system and was bored and annoyed with math for the next three years. Since this was so devastating to me, I clung to my IPI materials and I still have some all these years later. I use them and my experiences to balance the discussion we have in my graduate class when we discuss the Benny paper. You see, to me, IPI was not a failure; the way Benny’s teacher implemented it was. Teachers still had to teach when using IPI or of course it would be a failure.

My experience with IPI was different in key ways from what the Benny paper describes. The teachers would set up table groups each week based upon what book we were working on. Along with working independently through the workbook and tests, the students were required to discuss a question provided by the teacher, and s/he would ask each group to stop and discuss it at a particular time so s/he could be there to listen in. In addition to this, after each unit test, we had a brief one-on-one meeting with the teacher to discuss the content, where, according to my old handwriting, I was asked targeted questions and needed to explain my reasoning. In other words, my teachers did their own assessments and did not rely on bubble sheets.

True, the initial presentation of the material came through the workbook, and it’s true such a system would not engage all students all the time, but that’s where teachers come in. Teachers need to know their students. Teachers need flexibility day by day, student by student, to use or not use these tools. Allowing students to move through material at their own pace is still a good goal. Giving teachers tools to help them manage that is a good goal. Devising tools that remove teachers from the process is where we go wrong.