iAspire Reflect

Using Videos During Teacher Observations Part 4 – Introducing iAspire Reflect

In our previous three posts, we discussed my personal struggles with the teacher observation process, creating an environment where teachers self-reflect, and a wonderful resource called the Best Foot Forward Project from the Center for Education Policy Research at Harvard University.  Properly evaluating teachers with a focus on concrete evidence (video) can be a difficult transition for many schools and organizations.  Why?  Mostly because it is not the norm in education and is not what schools currently do.

To me, all decisions come down to purpose.  In the words of Simon Sinek in his wonderful book Start with Why: “For great leaders, The Golden Circle is in balance. They are in pursuit of WHY, they hold themselves accountable to HOW they do it, and WHAT they do serves as the tangible proof of what they believe.”  If you haven’t heard of Simon Sinek’s Golden Circle, below is a 3:40 clip of him explaining it:

So what exactly is iAspire Reflect?  iAspire Reflect is simple-to-use software that lets educators quickly upload video, add tags to it (questioning, lesson objective, etc.), share videos with colleagues, and search using a variety of criteria.  iAspire Reflect also lets you build a video library of the very best teaching in your organization.

The why behind iAspire Reflect is to help create an environment where self-reflection and observations based on concrete evidence become the norm.  I think about the professional athletes of the world and how often they watch game film.  Being from Indiana (although a Chicago Bears fan…), I have to make a reference to Peyton Manning, arguably one of the best football players of all time.  It was not Peyton’s elite athleticism that separated him from other quarterbacks; it was his ability and initiative to prepare for opponents.  Peyton spent countless hours watching game film.  He reviewed each practice and game from multiple angles, identifying what opponents were doing and their tendencies, and determining a plan based on what he saw.  He didn’t rely solely on his memory or what his coach told him to do.  Instead, he took complete ownership of the entire process and grounded his decisions in concrete evidence – what the “tape” showed him.  Here is an article from the New York Post on Peyton’s video prowess.  He was a machine when it came to preparation and watching film.

In an education environment, video observations allow the teacher and/or other educators to watch a clip, rewind, and watch it again.  What specific behaviors were most effective for the teacher, and how do you know?  What exactly did the students do and say as a result of the teacher’s actions?  This is where recording and watching a lesson becomes extremely powerful.

Another why behind iAspire Reflect is to help alleviate some of the struggles that I faced when observing teachers.  No need to rehash all the struggles – you can read them again here.  Without something concrete, I would not be able to provide the specific actions or dialogue for everything that happened, the teacher would be basing his/her reflections on what he/she remembered, and my own filters would be a barrier to what I was able to capture and document.

As we have been developing iAspire Reflect for teacher observations using video, we have had the privilege to speak with schools across the country to gain their input on our development.  When we discussed our idea for iAspire Reflect, about 95% have been very intrigued and excited by the possibility.  In fact, most of the final responses from these conversations sounded a lot like this: “Can I try it?” or “How can I get started with this?”.


To learn more, please visit

Is Your Intervention Cheating on You?

So, how many of you snickered the first time you heard the term “Fidelity of Implementation” at a workshop or conference?  The first time I heard it, I thought I had entered into a binding nuptial agreement at school.  Many of you have probably heard it go by several different names: Fidelity of Implementation (FOI), Treatment Integrity (TI), Procedural Reliability (PR), etc.  A general definition of FOI is “delivering an intervention or treatment the way it is intended or prescribed, with accuracy and consistency.”  This includes the technical aspects of delivery as indicated by the publisher or research study, as well as temporal aspects related to frequency (how often?) and duration (how long?).

Regardless of what you call it, though, it is often an under-appreciated aspect of the RTI process.  The truth is, if most of us were to track the percentage of time devoted to FOI, we would likely find it lacking in terms of time allocation at RTI meetings.  Many RTI teams spend a good portion of their time analyzing the problem, creating goals, picking progress monitoring tools, and devising interventions; however, how much time is spent discussing/tracking FOI?  The bottom line is we cannot attribute student outcome data to specific interventions unless we measure the extent to which the intervention plan was implemented.

So what do you think?  Check out the list below to see if your interventions are in trouble!

Top 5 ways to tell if your intervention is cheating:

5.  Merely mentioning the term “Intervention Integrity Check” elicits high levels of anxiety for all those involved.
4.  The intervention just isn’t keeping the same schedule it purports to keep (it is keeping weird hours).
3.  The implementation enthusiasm just isn’t there anymore.
2.  The intervention results are just too good to be true.
1.  Your intervention needs counseling.

Will you “renew your vow” to measure or track the FOI of your interventions? Have your interventions been cheating on you?  

~Jason Cochran

How Do You Measure Teacher Effectiveness?

As educators, we have a lot on our plates.  Somewhere among best practices, student relationships, responsive instruction, 21st-century skills, and our own families/personal lives, we need to find time to reflect.  What is going well for you and your teachers right now?  What is not?  How effective are your teachers?  And probably most importantly, how do you know?

There are myriad ways to measure teacher effectiveness.  Rubrics and walkthroughs are the most common, but some schools and districts have also included student assessments in the teacher evaluation mix.  Personally, I am a huge fan of the word ‘balance’.  Relying on any one tool to measure all aspects of teacher effectiveness is short-sighted and may result in faulty data or incorrect conclusions.  Instead, looking at several data points can increase the credibility and validity of the data.  This is the concept of data triangulation.

When thinking about data triangulation, consider multiple data points collected through various methods (qualitative and quantitative) and from various sources, such as:

  • Rubrics: rubrics are powerful and communicate to teachers their performance along a continuum of effectiveness.  Typically, 3–5 levels of teacher performance are described, with specific teacher actions in each rating category.  Ratings could range from ineffective to highly effective, basic to distinguished, or other categories.  Observers can give very specific feedback to teachers when using rubrics.  Rubrics are typically used for longer observations and final teacher ratings (when applicable) but can also be used for formative feedback throughout the school year.
  • Walkthroughs: walkthroughs are typically meant for shorter classroom visits.  They can collect big-picture data to determine trends across teachers, grades, subjects, buildings, and observers.  Walkthroughs usually have a specific list of selectable options in various categories, such as student engagement, types of questions, or instructional strategies being used by the teacher.  Walkthroughs are highly customizable and give organizations the flexibility to collect the data that matters most to them.
  • Short observations: short observations can be very simple in design and are typically used for narrative observations and feedback.  This narrative feedback communicates to teachers what was observed and the observer’s own reflections after the lesson.  The feedback might include questions, comments, commendations, and recommendations.
  • Student assessment data: ultimately, the goal of education is for students to do well on some form of assessment.  The assessment could be a performance, creation, reflection, discussion, test/quiz, state assessment, portfolio, etc.  When the assessment itself is valid and reliable, the student data can be a very important part of the teacher effectiveness equation.  On the other hand, if the assessment is not valued (think state assessments…) or reliable, perhaps it’s better to consider other assessments for teacher evaluation data points.

There are a few other data points schools could consider when thinking about teacher effectiveness: parent input, student input, and peer input.  All must be carefully considered before deciding to include them in your teacher evaluation program.

This leads me to a few questions: what’s best for you and your unique situation?  What information do you want to be able to measure and track?  How do you know whether your teachers are performing well in certain areas?  Each school/district/organization will have specific indicators of success based on the unique community it serves.  There is no one-size-fits-all approach that will work for all organizations.  Instead, consider what your collective goals are and what your vision of success means, and align your teacher evaluation program to those goals.

How do you measure the effectiveness of your teachers?