<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>
	Comments on: Teacher Data Dashboards Are Hard, Pt. 2	</title>
	<atom:link href="/2013/teacher-data-dashboards-are-hard-pt-2/feed/" rel="self" type="application/rss+xml" />
	<link>/2013/teacher-data-dashboards-are-hard-pt-2/</link>
	<description>less helpful</description>
	<lastBuildDate>Fri, 25 Apr 2014 14:19:36 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
	<item>
		<title>
		By: dy/dan &#187; Blog Archive &#187; Teacher Data Dashboards Are Hard, Pt. 1		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1604331</link>

		<dc:creator><![CDATA[dy/dan &#187; Blog Archive &#187; Teacher Data Dashboards Are Hard, Pt. 1]]></dc:creator>
		<pubDate>Fri, 25 Apr 2014 14:19:15 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1604331</guid>

					<description><![CDATA[[&#8230;] 2013 Sep 12. Part two. [&#8230;]]]></description>
			<content:encoded><![CDATA[<p>[&#8230;] 2013 Sep 12. Part two. [&#8230;]</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Dan Meyer		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1017420</link>

		<dc:creator><![CDATA[Dan Meyer]]></dc:creator>
		<pubDate>Sat, 21 Sep 2013 20:04:40 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1017420</guid>

					<description><![CDATA[There doesn&#039;t seem to be a lot of downside there. With paper, those extra questions stare back at the student, freaking them out. With the digital environment, we can unload those questions progressively, at less cost to the student&#039;s brain.]]></description>
			<content:encoded><![CDATA[<p>There doesn&#8217;t seem to be a lot of downside there. With paper, those extra questions stare back at the student, freaking them out. With the digital environment, we can unload those questions progressively, at less cost to the student&#8217;s brain.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: anonpls		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1016901</link>

		<dc:creator><![CDATA[anonpls]]></dc:creator>
		<pubDate>Fri, 20 Sep 2013 20:20:07 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1016901</guid>

					<description><![CDATA[Given what Mr. Bosma shared from his attempts at using this system, I wonder...

Do y&#039;all have the capability to just go ahead and provide students digitally with the follow-up prompts you&#039;ve written for teachers to discuss with students in person?

I understand that you&#039;re trying to facilitate teacher-student interactions, but I don&#039;t think giving students the opportunity to consider those extension questions/comments on their own would detract from the potential in-person conversations.

On the other hand, if the teacher does end up with too many fires to put out dealing with tech issues, explaining directions, or whatever... at least the students would have a chance to reconsider looking at circles of a different size (or what have you).]]></description>
			<content:encoded><![CDATA[<p>Given what Mr. Bosma shared from his attempts at using this system, I wonder&#8230;</p>
<p>Do y&#8217;all have the capability to just go ahead and provide students digitally with the follow-up prompts you&#8217;ve written for teachers to discuss with students in person?</p>
<p>I understand that you&#8217;re trying to facilitate teacher-student interactions, but I don&#8217;t think giving students the opportunity to consider those extension questions/comments on their own would detract from the potential in-person conversations.</p>
<p>On the other hand, if the teacher does end up with too many fires to put out dealing with tech issues, explaining directions, or whatever&#8230; at least the students would have a chance to reconsider looking at circles of a different size (or what have you).</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Tim Stirrup		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1015732</link>

		<dc:creator><![CDATA[Tim Stirrup]]></dc:creator>
		<pubDate>Wed, 18 Sep 2013 19:15:25 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1015732</guid>

					<description><![CDATA[Your comment about measuring what is easy strikes a chord. At Mathspace (which is very new, you have probably not heard of it!) we give feedback on, and record, each step as students solve a problem. These tend to be more closed questions and answers than the ones you show above. 

But the step-by-step approach means that students see their working out and have it checked as they go along, so they know if they are on the right course. 

The dashboard for the teacher then shows progress for all students, across all the questions for an assignment. The teacher can see who has gone through with no errors but, crucially, can also see exactly how each student has solved a problem (or not). In this way, the teacher can use the information to correct any misconceptions for the class or for the individual. 

The new adaptive functionality released this weekend does measure progress towards some &#039;mastery&#039; - but the analytics on this one is a work in progress. 

There&#039;s a very brief video on this here: http://www.youtube.com/watch?v=MyOvRpUcrGw 

But we try not to provide too much data, just the data that is needed by the teacher.]]></description>
			<content:encoded><![CDATA[<p>Your comment about measuring what is easy strikes a chord. At Mathspace (which is very new, you have probably not heard of it!) we give feedback on, and record, each step as students solve a problem. These tend to be more closed questions and answers than the ones you show above. </p>
<p>But the step-by-step approach means that students see their working out and have it checked as they go along, so they know if they are on the right course. </p>
<p>The dashboard for the teacher then shows progress for all students, across all the questions for an assignment. The teacher can see who has gone through with no errors but, crucially, can also see exactly how each student has solved a problem (or not). In this way, the teacher can use the information to correct any misconceptions for the class or for the individual. </p>
<p>The new adaptive functionality released this weekend does measure progress towards some &#8216;mastery&#8217; &#8211; but the analytics on this one is a work in progress. </p>
<p>There&#8217;s a very brief video on this here: <a href="http://www.youtube.com/watch?v=MyOvRpUcrGw" rel="nofollow ugc">http://www.youtube.com/watch?v=MyOvRpUcrGw</a> </p>
<p>But we try not to provide too much data, just the data that is needed by the teacher.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Pam		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1015556</link>

		<dc:creator><![CDATA[Pam]]></dc:creator>
		<pubDate>Wed, 18 Sep 2013 12:27:41 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1015556</guid>

					<description><![CDATA[Not about the dashboard specifically, but about the problem. Looking at the results through the dashboard, I noticed that a lot of my students&#039; reasons for choosing the quadratic model over the others were &quot;because it fit the points.&quot; I wonder, then, if it would be worth including a cubic model, or a generic power model y=ax^n with a slider for the exponent. Then we could really get into why a quadratic would be better than any other model. Because I bet they could make a cubic model fit, depending on the data that comes up.]]></description>
			<content:encoded><![CDATA[<p>Not about the dashboard specifically, but about the problem. Looking at the results through the dashboard, I noticed that a lot of my students&#8217; reasons for choosing the quadratic model over the others were &#8220;because it fit the points.&#8221; I wonder, then, if it would be worth including a cubic model, or a generic power model y=ax^n with a slider for the exponent. Then we could really get into why a quadratic would be better than any other model. Because I bet they could make a cubic model fit, depending on the data that comes up.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Kevin Hall		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1013996</link>

		<dc:creator><![CDATA[Kevin Hall]]></dc:creator>
		<pubDate>Mon, 16 Sep 2013 03:52:59 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1013996</guid>

					<description><![CDATA[@Dan  Oh, I&#039;d also point out that your ideal SBG dashboard (the last picture on this post: /?p=1558 ) looks like it uses broadly applicable, but generic categories similar to the KA ones.  So I think generic categories can be good for SBG-type dashboards.  But I think activity-specific dashboards like the one for your Desmos pennies lab are what you need for in-the-heat-of-the-moment data.]]></description>
			<content:encoded><![CDATA[<p>@Dan  Oh, I&#8217;d also point out that your ideal SBG dashboard (the last picture on this post: <a href="/?p=1558" rel="ugc">/?p=1558</a> ) looks like it uses broadly applicable, but generic categories similar to the KA ones.  So I think generic categories can be good for SBG-type dashboards.  But I think activity-specific dashboards like the one for your Desmos pennies lab are what you need for in-the-heat-of-the-moment data.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Kevin Hall		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1013994</link>

		<dc:creator><![CDATA[Kevin Hall]]></dc:creator>
		<pubDate>Mon, 16 Sep 2013 03:47:08 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1013994</guid>

					<description><![CDATA[@Dan, but remember that even with an expert teacher such as yourself using SBG, you were only 47% accurate: /?p=2877 .  I would venture to say that KA is now more accurate than you were.  And, as you say at the end of the post you linked to above (/?p=1558 ), you were putting in 58-hour weeks implementing that.  If that&#039;s considered standard operating procedure, SBG will never really take off.  

Your point about KA being on their thirteenth-ish attempt to define mastery is well taken, however.  Part of the problem was their initial reluctance, for reasons I couldn&#039;t understand, to engage with anyone from the researcher community.  I assumed it had to do with a Silicon Valley mindset that they could figure out anything on their own.  So they ended up getting all excited about training a machine-learning logistic model, when the rest of us were looking at them and thinking, um, did you just rediscover Item Response Theory?  (See the comments on this blog post: http://david-hu.com/2011/11/02/how-khan-academy-is-using-machine-learning-to-assess-student-mastery.html ) 

But I think they are past that attitude now--they seem to be more aware of the need to read and respond to the literature.  I mean, you might not agree with everything they do, but they clearly read your blog, and I think they take suggestions and feedback seriously.

I&#039;d also venture to guess that their model is now pretty good, since lots of modeling methods seem to get close to the theoretical maximum predictive accuracy, according to this article from the recent conference on Educational Data Mining: http://www.educationaldatamining.org/EDM2013/papers/rn_paper_04.pdf .  Check out sections 4 and 5 of that paper for the main results and a summary of the problems in educational data mining that we still don&#039;t know how to solve. I won&#039;t pretend to have read or understood the methods section...]]></description>
			<content:encoded><![CDATA[<p>@Dan, but remember that even with an expert teacher such as yourself using SBG, you were only 47% accurate: <a href="/?p=2877" rel="ugc">/?p=2877</a> .  I would venture to say that KA is now more accurate than you were.  And, as you say at the end of the post you linked to above (<a href="/?p=1558" rel="ugc">/?p=1558</a> ), you were putting in 58-hour weeks implementing that.  If that&#8217;s considered standard operating procedure, SBG will never really take off.  </p>
<p>Your point about KA being on their thirteenth-ish attempt to define mastery is well taken, however.  Part of the problem was their initial reluctance, for reasons I couldn&#8217;t understand, to engage with anyone from the researcher community.  I assumed it had to do with a Silicon Valley mindset that they could figure out anything on their own.  So they ended up getting all excited about training a machine-learning logistic model, when the rest of us were looking at them and thinking, um, did you just rediscover Item Response Theory?  (See the comments on this blog post: <a href="http://david-hu.com/2011/11/02/how-khan-academy-is-using-machine-learning-to-assess-student-mastery.html" rel="nofollow ugc">http://david-hu.com/2011/11/02/how-khan-academy-is-using-machine-learning-to-assess-student-mastery.html</a> ) </p>
<p>But I think they are past that attitude now&#8211;they seem to be more aware of the need to read and respond to the literature.  I mean, you might not agree with everything they do, but they clearly read your blog, and I think they take suggestions and feedback seriously.</p>
<p>I&#8217;d also venture to guess that their model is now pretty good, since lots of modeling methods seem to get close to the theoretical maximum predictive accuracy, according to this article from the recent conference on Educational Data Mining: <a href="http://www.educationaldatamining.org/EDM2013/papers/rn_paper_04.pdf" rel="nofollow ugc">http://www.educationaldatamining.org/EDM2013/papers/rn_paper_04.pdf</a>.  Check out sections 4 and 5 of that paper for the main results and a summary of the problems in educational data mining that we still don&#8217;t know how to solve. I won&#8217;t pretend to have read or understood the methods section&#8230;</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Kate Nowak		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1013727</link>

		<dc:creator><![CDATA[Kate Nowak]]></dc:creator>
		<pubDate>Sun, 15 Sep 2013 18:19:53 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1013727</guid>

					<description><![CDATA[Never underestimate the time-suckage of frozen browsers and students-not-reading. Also, #first #gurrrlllll.]]></description>
			<content:encoded><![CDATA[<p>Never underestimate the time-suckage of frozen browsers and students-not-reading. Also, #first #gurrrlllll.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Ben		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1013720</link>

		<dc:creator><![CDATA[Ben]]></dc:creator>
		<pubDate>Sun, 15 Sep 2013 18:11:05 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1013720</guid>

					<description><![CDATA[The Discovery Assessment platform does a decent job of providing a respectable response to your first and third points. While I don&#039;t have a screenshot to offer (although I will have many in about a month), it tries to break student data down into relatively simple color-coded indicators of which skills students are deficient in. While it can provide all of the minutiae, like time spent per problem, most teachers using the data prefer to stick with the simple indicators of which skills students need work on. I haven&#039;t personally used it in a classroom, but the teachers in my district who have used it all enjoy the connections that it makes to resources inside of Discovery Education&#039;s website, and how easy it makes it to assign certain tasks to the students.

As this now sounds like an advertisement for Discovery Education, I&#039;ve also had many teachers just take the data and use it with existing (non-Discovery-provided) interventions and teaching strategies within small groups in their classrooms.]]></description>
			<content:encoded><![CDATA[<p>The Discovery Assessment platform does a decent job of providing a respectable response to your first and third points. While I don&#8217;t have a screenshot to offer (although I will have many in about a month), it tries to break student data down into relatively simple color-coded indicators of which skills students are deficient in. While it can provide all of the minutiae, like time spent per problem, most teachers using the data prefer to stick with the simple indicators of which skills students need work on. I haven&#8217;t personally used it in a classroom, but the teachers in my district who have used it all enjoy the connections that it makes to resources inside of Discovery Education&#8217;s website, and how easy it makes it to assign certain tasks to the students.</p>
<p>As this now sounds like an advertisement for Discovery Education, I&#8217;ve also had many teachers just take the data and use it with existing (non-Discovery-provided) interventions and teaching strategies within small groups in their classrooms.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Dan Meyer		</title>
		<link>/2013/teacher-data-dashboards-are-hard-pt-2/#comment-1013711</link>

		<dc:creator><![CDATA[Dan Meyer]]></dc:creator>
		<pubDate>Sun, 15 Sep 2013 17:51:35 +0000</pubDate>
		<guid isPermaLink="false">/?p=17265#comment-1013711</guid>

					<description><![CDATA[Thanks for all the commentary, team. A couple of follow-up questions here, a couple of follow-up comments there, and then a couple of instances where I just want to throw two comments against each other and watch the sparks:

For instance, here&#039;s &lt;strong&gt;Tom Woodward&lt;/strong&gt;:

&lt;blockquote&gt;The other piece I worry about is the relatively unattainable nature of some of the skills needed for building interesting/useful digital content for most teachers. I really want to provision content for teachers and then be able to give them access to changing/building their own content. While many are happy consuming what’s given, there are people who will want to make it their own or it will spark new ideas. I hate the idea that the next step would be out of reach of most of that subset.&lt;/blockquote&gt;

And earlier there&#039;s &lt;strong&gt;Eric Scholz&lt;/strong&gt; looking for &lt;em&gt;exactly&lt;/em&gt; that kind of customization:

&lt;blockquote&gt;I would add a “bank” of variables at the top of the page that teachers choose from when building their lesson plan the night before. This would allow for a variety of objectives for the lesson.&lt;/blockquote&gt;

&lt;strong&gt;Bob Lochel&lt;/strong&gt;, being useful:

&lt;blockquote&gt;While many adaptive systems propose to help students along the way, they are often mis-interpreted as summative assessments, through their similarities to traditional grading terms and mechanisms.&lt;/blockquote&gt;

&lt;strong&gt;Tom Woodward&lt;/strong&gt;, being useful:

&lt;blockquote&gt;There could/should be some value to a dashboard that guides formative synchronous action but it’d have to be really low on cognitive demand.&lt;/blockquote&gt;

&lt;strong&gt;Dave&lt;/strong&gt; really throws down the gauntlet with his product roadmap. I tend to think most of the data he&#039;d like a tablet to supply is way, way outside the capabilities of most adaptive systems. (The really interesting stuff anyway — #1 and #2.) Regardless, I&#039;ll keep that roadmap in the back of my head. Give us all a few decades, okay?

&lt;strong&gt;Kevin Hall:&lt;/strong&gt;

&lt;blockquote&gt;But would it really help if the interface said of student performance on solving equations, “Lucas and Michelle get these wrong because they combine like terms on opposite sides of the equals sign; Marcus and Pamela get them wrong because they often subtract something to the left side twice and do nothing to the right side; Angela and Brian…” The way I read your post, the limitation is that teachers don’t have time to respond in class to all these individual needs anyways.&lt;/blockquote&gt;

Assuming the computer&#039;s judgment was correct 90% of the time? Yeah, it would really help. There&#039;s the data a teacher can use in class, which is a subset of the data a teacher can use in general. Knowing why Marcus and Pamela are struggling to solve equations would definitely help with lesson planning, lunchtime remediation, etc., even if a lot of teachers would struggle to put that information to work immediately.

&lt;strong&gt;Bill Fitzgerald&lt;/strong&gt;:

&lt;blockquote&gt;I’ve come to the perspective that dashboards are really a focused search tool — rather than telling you what you need to know, a good dash will let you ask questions about the information it contains. Some of this is text-based search, and some of this is filtered (based on facets that are either structured into the app’s architecture, or inferred from student responses) — but in any case, a good dash will reflect the assumption that success can look different for different students.&lt;/blockquote&gt;

Any examples you can share?

&lt;strong&gt;Kevin Hall&lt;/strong&gt;:

&lt;blockquote&gt;@Dan, curious what you think a dashboard would look like to assist teachers with implementing SBG. Or do you think no dashboard would work for that?&lt;/blockquote&gt;

Something like the last image on &lt;a href=&quot;/?p=1558&quot; rel=&quot;nofollow&quot;&gt;this page&lt;/a&gt; here.

Okay, j/k sort of. But SBG information is the archetypal not-useful-for-the-teacher-in-the-heat-of-the-moment kind of information. Riley Lark created a nice SBG system called ActiveGrade (since acquired by someone). It just managed and displayed the information. What I liked most about it was that the teacher had to make the judgment that a student had achieved mastery, rather than relying on Khan&#039;s thirteenth-ish attempt at defining mastery algorithmically.]]></description>
			<content:encoded><![CDATA[<p>Thanks for all the commentary, team. A couple of follow-up questions here, a couple of follow-up comments there, and then a couple of instances where I just want to throw two comments against each other and watch the sparks:</p>
<p>For instance, here&#8217;s <strong>Tom Woodward</strong>:</p>
<blockquote><p>The other piece I worry about is the relatively unattainable nature of some of the skills needed for building interesting/useful digital content for most teachers. I really want to provision content for teachers and then be able to give them access to changing/building their own content. While many are happy consuming what’s given, there are people who will want to make it their own or it will spark new ideas. I hate the idea that the next step would be out of reach of most of that subset.</p></blockquote>
<p>And earlier there&#8217;s <strong>Eric Scholz</strong> looking for <em>exactly</em> that kind of customization:</p>
<blockquote><p>I would add a “bank” of variables at the top of the page that teachers choose from when building their lesson plan the night before. This would allow for a variety of objectives for the lesson.</p></blockquote>
<p><strong>Bob Lochel</strong>, being useful:</p>
<blockquote><p>While many adaptive systems propose to help students along the way, they are often mis-interpreted as summative assessments, through their similarities to traditional grading terms and mechanisms.</p></blockquote>
<p><strong>Tom Woodward</strong>, being useful:</p>
<blockquote><p>There could/should be some value to a dashboard that guides formative synchronous action but it’d have to be really low on cognitive demand.</p></blockquote>
<p><strong>Dave</strong> really throws down the gauntlet with his product roadmap. I tend to think most of the data he&#8217;d like a tablet to supply is way, way outside the capabilities of most adaptive systems. (The really interesting stuff anyway — #1 and #2.) Regardless, I&#8217;ll keep that roadmap in the back of my head. Give us all a few decades, okay?</p>
<p><strong>Kevin Hall:</strong></p>
<blockquote><p>But would it really help if the interface said of student performance on solving equations, “Lucas and Michelle get these wrong because they combine like terms on opposite sides of the equals sign; Marcus and Pamela get them wrong because they often subtract something to the left side twice and do nothing to the right side; Angela and Brian…” The way I read your post, the limitation is that teachers don’t have time to respond in class to all these individual needs anyways.</p></blockquote>
<p>Assuming the computer&#8217;s judgment was correct 90% of the time? Yeah, it would really help. There&#8217;s the data a teacher can use in class, which is a subset of the data a teacher can use in general. Knowing why Marcus and Pamela are struggling to solve equations would definitely help with lesson planning, lunchtime remediation, etc., even if a lot of teachers would struggle to put that information to work immediately.</p>
<p><strong>Bill Fitzgerald</strong>:</p>
<blockquote><p>I’ve come to the perspective that dashboards are really a focused search tool — rather than telling you what you need to know, a good dash will let you ask questions about the information it contains. Some of this is text-based search, and some of this is filtered (based on facets that are either structured into the app’s architecture, or inferred from student responses) — but in any case, a good dash will reflect the assumption that success can look different for different students.</p></blockquote>
<p>Any examples you can share?</p>
<p><strong>Kevin Hall</strong>:</p>
<blockquote><p>@Dan, curious what you think a dashboard would look like to assist teachers with implementing SBG. Or do you think no dashboard would work for that?</p></blockquote>
<p>Something like the last image on <a href="/?p=1558" rel="nofollow">this page</a> here.</p>
<p>Okay, j/k sort of. But SBG information is the archetypal not-useful-for-the-teacher-in-the-heat-of-the-moment kind of information. Riley Lark created a nice SBG system called ActiveGrade (since acquired by someone). It just managed and displayed the information. What I liked most about it was that the teacher had to make the judgment that a student had achieved mastery, rather than relying on Khan&#8217;s thirteenth-ish attempt at defining mastery algorithmically.</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
