You’re Invited to Learn About the Use of Improvement Science and Networked Improvement Communities in Education
For the past five years, the Carnegie Foundation for the Advancement of Teaching has been pioneering a fundamentally new vision for the research and development enterprise in education. We seek to join the discipline of improvement science with the capabilities of networks specifically designed to foster innovation and social learning. This approach is embodied in what Carnegie refers to as Networked Improvement Communities (NICs). These NICs are scientific learning communities distinguished by four essential characteristics:
- focused on a well-specified common aim,
- guided by a deep understanding of the problem and the system that produces it,
- disciplined by the rigor of improvement research, and
- networked together to accelerate the development and testing of possible improvements, their more rapid diffusion, and their effective integration into the highly varied contexts that make up education in America.
While there remains much more to be learned about networked improvement communities in education, evidence is beginning to accumulate regarding their benefits for development, implementation, and improvement. Carnegie’s experience to date with NICs has not only borne this out but has also provided evidence of significant impact on longstanding and seemingly intractable problems (most notably, the inordinately high failure rates of community college students in developmental mathematics).
The Explorer’s Workshop offers a first engagement with the ideas of improvement science pursued in the context of NICs. Lasting two days, it includes a “boot camp” introduction to improvement science, covering problem definition and specification, systems analysis, measurement and analytics, change theory, and improvement research, as well as the principles behind the organization, initiation, and support of NICs. It is intended for those interested in, but not yet deeply knowledgeable about, this work.
Who should attend?
The Workshop is designed to support teams from projects considering adoption of a NIC approach, as well as individuals wishing to learn more about this strategy for accelerating learning in and through the practice of improvement.
The next Workshop will be held at the Carnegie Foundation in Stanford, California, March 7–8, 2013. The cost for the workshop is $1,500, with some meals and a reception included.
Registration will begin mid-January, but to “pre-register” or for more information, email Gay Clyburn at firstname.lastname@example.org.
The Futures of School Reform, edited by Jal Mehta, Robert B. Schwartz, and Frederick M. Hess, is a collection of essays that seeks to push the boundaries of current education reform efforts in order to generate dramatic change through high-leverage issues. In the second of the essays, “Building on Practical Knowledge,” Carnegie President Anthony Bryk, Senior Fellow Louis Gomez, and Mehta, who is at the Harvard Graduate School of Education, advocate an approach to professionalizing the field of education using the principles of a Networked Improvement Community. The authors argue that teaching is not a professionalized field in which knowledge is shared and best practices are developed. The hierarchical and bureaucratic structure of schools, districts, and state and federal governments means that local learning and adaptation are disregarded when standardization is imposed across schools and policies are implemented. There is no formal system to develop teachers’ practical knowledge, training, or apprenticeship, resulting in wide variation in teacher performance.
In addition, the authors write that traditional approaches to educational research—translational research and action research—are often divorced from practice and have failed to further the improvement of teaching and student learning. Translational research treats innovation as a stage-wise, linear inquiry process and generalizes solutions well, but fails to accommodate local insights. Action research, on the other hand, flows from practice and is improvement-oriented, but puts a low priority on generalizable mapping of cause and effect, making it difficult to scale successful changes.
The authors discuss a “third way”: a Networked Improvement Community (NIC), Carnegie’s approach to problems of practice that Bryk and colleagues are attempting to integrate into the field of education research. Through the use of common targets, a shared language, and common protocols for inquiry, this “third way” uses improvement science to account for local contexts and uses networks to help generalize and scale solutions. NICs can thus help professionalize teaching by framing the profession around practice improvement, engendering routines that enable inquiry, and helping teachers feel part of a broader profession. NICs create a system to support a knowledge profession by building human capital centered on practice, allowing states and districts to focus on providing an infrastructure for educational improvement, and creating policy that allows a “greenfield” for social learning. Doing this will help improve performance, learning, and equity in our schools, the authors note.
In order to get there, the authors believe every actor has a role to play. School leaders should think of their institutions as organizations that can learn, unions should free up teachers and schools to take responsibility, governmental actors need to move away from a focus on control and compliance and toward support and learning, and institutions need to act as focal points for large collective-action problems. They conclude, “If all actors, throughout the system, began to conceive their jobs as transforming an Industrial Age compliance structure into a profession of competent, skilled, and continuously learning practitioners, collectively we might finally be able to move our education system into the twenty-first century” (64).
The chapter helps highlight the motivations and results of a well-functioning NIC. Carnegie’s two NIC-supported programs, the Community College Pathways and the Building a Teaching Effectiveness Network, not only directly help students become more successful; they also help professionalize teaching and thus accelerate improvement. Through the use of NICs, the authors write, the field can take advantage of promising education reforms with the input of practitioners who, working together, can scale successful changes and generate dramatic improvement.
The Carnegie Foundation for the Advancement of Teaching is launching the Carnegie Alpha Lab Research Network to engage academic researchers from diverse fields to assist the Foundation in its mathematics Pathways initiative.
The Pathways (Statway™ and Quantway™) address the problem of the high failure rate of community college students in developmental mathematics. The goal is to increase dramatically, from 5 percent to 50 percent, the proportion of students who achieve transferable college math credit within one year of continuous enrollment.
The Alpha Lab Network, funded by the National Science Foundation, aims to coordinate the efforts of researchers interested in leveraging their own research expertise to improve the Carnegie Pathways. In addition, the Network will support pre-doctoral students who, in collaboration with their mentors, will engage in early-stage research as part of the Network.
In contrast to the traditional approach to educational research, researchers in the Carnegie Alpha Lab Network will work within priorities set by a networked improvement community working on the same problem of practice.
Researchers will work on two types of projects: ones designed to deepen Carnegie’s understanding of the problem, both theoretical and empirical; and projects designed to develop and test theory-based solutions to network challenges.
The Network is headquartered at the University of California, Los Angeles under the direction of Jim Stigler and Karen Givvin.
For more information on the Carnegie Alpha Lab Research Network or to join the mailing list, visit www.carnegiealphalabs.org.
Why are we at Carnegie interested in improvement research? What does the work of the Institute for Healthcare Improvement (IHI) have to do with education?
The answers to these questions are related. In both sectors, there is a gap between what is known and what happens daily in practice. Both sectors are made up of a dedicated workforce whose best efforts do not consistently add up to improvement. And both healthcare and education face the challenge of effectively and efficiently effecting improvement at scale. Improvement research holds promise for addressing these challenges, and IHI has decades of experience using these methodologies to foster change. We knew we could learn from them.
Improvement research is based on simple but powerful questions, articulated in the Model for Improvement by Associates in Process Improvement (API): (1) What are we trying to accomplish? (2) How will we know that a change is an improvement? (3) What changes can we make that will result in an improvement? Together these questions structure an active and disciplined way of pursuing change. As we begin to apply improvement research to education, we have found it useful to begin conversations around improvement with a fourth question: (4) How do we understand the problems and the systems in which they are embedded? We have a tendency in education to jump to solutions and not think deeply about the problems we are trying to solve. A more productive approach starts with a problem and takes a careful look across the system to better understand the causes that influence current outcomes.
It is these four improvement research questions that have structured the strand of our Statway™ and Quantway™ Community College Pathways program of work that we have come to call Productive Persistence. Since we took on the problem of the extraordinarily high failure rates of community college students in developmental math, we have known that we could not get movement on the kinds of outcomes we were looking for by changing the curriculum or course structure alone. There was a common notion that it was important to attend to what are variously referred to as student success factors, student motivation and engagement, or non-cognitive factors. There was also a lot of activity in this area and many innovations to draw on. Lack of innovation was certainly not the problem.
Many financial and human resources are already dedicated to student success activity in community colleges. Community colleges offer students a variety of initiatives and services designed to help them succeed in college, some of them quite innovative. But if you walk from one institution to another, there is very little agreement as to what makes a good student success program. And the evidence that these efforts are accumulating into real improvements in the college lives of students is weak. We also know that there are a lot of exciting new research theories—particularly from social psychology—about specific practices that could be powerful levers of change. However, it is not really clear how these theories would be made to work in practice, specifically as applied to developmental math and with community college students. There are a lot of exciting ideas, but the translation needed to make them work reliably in real contexts is not there.
As we tried to structure this strand of work into the Pathways, we experienced a time of flailing at Carnegie as well. We knew we needed to work on it, we had people assigned to the task, and everybody believed it was important, but from conversation to conversation no one could really tell you the same thing about what we were doing or what specifically we were trying to accomplish. To focus the work and halt the flailing, we launched a 90-day cycle in the fall of 2010. A 90-day cycle is an improvement research tool developed by IHI to accomplish deep-dive, quick-turnaround research.
We began this R&D process to build a theory of change and a measurement model to go along with it. We were attempting to answer two of the improvement questions for this strand of work: what specifically are we trying to accomplish, and how will we know if a change is an improvement? We put together a team with the relevant expertise in social psychology and improvement research and with on-the-ground experience supporting developmental math students. We scanned the field, talked to many people who understood the problem from different angles, and identified the five areas that were most important to focus on to get to the outcome we cared about. We “tested” these drivers with a diverse set of experts and built a measurement model that would enable us to refine this theory over time.
One of the things that separates improvement science from other education research approaches is that it is not about being comprehensive. The goal is not to develop a conceptual framework that organizes every possible influence and includes everything we could work on. Instead, we asked: What are the big drivers for improvement? And what measures will we need to learn from our efforts at change and to improve our theory over time? Since this initial 90-day cycle, the Productive Persistence team has refined our measurement model to make it more practical and embedded in the daily lives of community college students with minimal disruption. They have collected these measures in our networks and convened additional experts, improving the theory over time. And they have started to develop and test changes, focusing on the critical first three weeks of the course.
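The structure that emerged from that cycle, a small set of drivers with measures attached to each, lends itself to a very simple representation. Below is a minimal sketch, in Python, of how a team might record such a working theory so it can be interrogated and revised as evidence accumulates; the aim, driver names, and measure shown are illustrative placeholders of our own, not the actual Productive Persistence model.

    from dataclasses import dataclass, field

    @dataclass
    class Driver:
        name: str                                     # a factor believed to influence the aim
        measures: list = field(default_factory=list)  # practical measures used to learn from changes

    @dataclass
    class WorkingTheory:
        aim: str                                      # what, specifically, we are trying to accomplish
        drivers: list = field(default_factory=list)

        def unmeasured_drivers(self):
            # flag the parts of the theory we cannot yet learn from
            return [d.name for d in self.drivers if not d.measures]

    # Hypothetical content, for illustration only.
    theory = WorkingTheory(
        aim="Increase the share of students earning transferable math credit in one year",
        drivers=[
            Driver("Students feel socially tied to peers, faculty, and the course",
                   measures=["short start-of-term survey item on belonging"]),
            Driver("Students see the course as having value for their goals"),  # no measure yet
        ],
    )

    print(theory.unmeasured_drivers())
    # -> ['Students see the course as having value for their goals']

The point of the exercise is not the code but the discipline it encodes: every driver a team commits to working on needs a practical measure attached to it, or the theory cannot be improved over time.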
In the process, we have become increasingly convinced that improvement methodologies hold promise for productively integrating diverse kinds of expertise to solve important problems. We often talk about notions of bridging research and practice. Normally we mean just that: building a thin span between two land masses that stay firmly planted. Research stays firmly on one side of a line, practice stays firmly on the other, and we have a tiny space in which they talk to each other. Improvement research brings these two sides together in a collective process aimed at solving concrete problems of practice. It pairs action with discipline, moving some people into action more quickly than they are comfortable with and requiring others to be a little more patient and disciplined. It also carries with it the excitement of bringing ideas into action, helping our best efforts lead to visible improvements in the lives of students.
This post was adapted from a presentation to the Executive Committee of the Carnegie Board of Trustees.
Joshua Glazer visited Carnegie recently to talk about ideas outlined in an article, “Reconsidering Replication: New Perspectives on Large-Scale School Improvement,” that was published in the Journal of Educational Change. Glazer is with The Rothschild Foundation in Jerusalem and his co-author Donald Peurach is with the School of Education at the University of Michigan. In the article, the authors focus on a change strategy that features a “hub” organization collaborating with “outlet” schools to enact school-wide designs for improvement.
The ideas in the article and the presentation seemed synergistic with Carnegie’s work in Networked Improvement Communities. Both the authors’ approach to school improvement and Carnegie’s work in transforming developmental mathematics in community colleges align around networks as learning organizations supported by central hubs.
One key difference between the Glazer/Peurach model and Carnegie’s work is the social organization, illustrated by the dissimilar roles of the hubs in each. In Glazer’s model, the hub leads the implementation network; its role is to build knowledge outside of individual network sites, devise a model to be enacted by the “outlet” schools, and monitor the fidelity of implementation. This model is hub-centric. In Carnegie’s model, the hub is “first among equals,” tending to the health and well-being of the network and supporting the integrity of implementation of powerful ideas. It delineates an initial problem; recruits leaders and champions into the work; establishes rules, roles, and responsibilities for participation; creates an initial conceptual framework; and offers an analytic and technical infrastructure for the work. Carnegie’s hub is less an enforcer and more a gardener.
The key similarity in the two approaches is that the hub in both configurations is a capable organization that sits in the middle to make the whole enterprise run. Both Carnegie’s and Glazer’s network hubs study the innovations going on in networks. Both models see schools/colleges as learning and problem-solving organizations that analyze and address problems of practice. In both, the hub vets the knowledge, and then uses that knowledge to improve the design, supporting the constant improvement and iteration in an evidence-based process of change. And in both models, every school learns on the basis of the wisdom of many others doing the same kinds of work.
The bottom line for both approaches is that they exist to solve the problem that, despite the growth of improvement efforts, we continue to repeat the tragedy of local innovation “dying on the vine.” What is clearly needed, and what Carnegie, Glazer, and colleagues are developing and promoting, is an infrastructure that allows us to cull and synthesize the best of what we know from scholarship and practice, rapidly develop and test prospective improvements, deploy what we learn about what works in schools and classrooms, and add to our knowledge to continuously improve the performance of the system.
For many years, educational researchers have worked with program designers and implementers in pursuit of what has been called fidelity of implementation. Simply put, this has involved the application of numerous tools and procedures designed to ensure that implementers replicate programs exactly as they were designed and intended.
There is a simple and immutable logic to this urge to strictly control implementation. It is compelled by the methodologies (and their attendant mindset) that warrant programs as effective. The manner in which we validate “what works” in education involves research methodologies that privilege explanatory power as their primary purpose. To arrive at valid explanations, the methodologies necessarily abstract problems and programmatic solutions from their contexts.
The result is an empirical warrant, offered in the form of significant effect-size indices, that represents impacts never actually observed anywhere. These indices are the average effect across the many individual implementations subsumed within the research. The analysis that informs our sense of “what works” does not measure anything that actually happened in any one place, nor does it represent the impact likely to be realized in any subsequent implementation.
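A made-up numerical illustration of the point (the effect sizes here are invented for clarity, not drawn from any study): suppose a program is evaluated across two equally sized groups of sites and produces effects

    \delta_1 = +0.6 \quad \text{and} \quad \delta_2 = -0.2,
    \qquad \bar{\delta} = \tfrac{1}{2}(\delta_1 + \delta_2) = \tfrac{1}{2}(0.6 - 0.2) = +0.2 .

The reported warrant, an average effect of +0.2, describes neither group of sites and is a weak forecast of what any particular new implementation should expect.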
But what typical R&D thinking does is compel us to pursue fidelity of implementation. The traditional empirical warrant justifying a program as effective holds only in so far as the program is replicated exactly as tested. And so, exact reproduction is made necessary – despite the fact that it is not likely to produce the same effects or even be possible to accomplish.
The issue is that problems simply don’t exist and programmatic solutions simply cannot be implemented outside of their contexts. The real challenge of implementation, then, is to figure out how to thoughtfully accommodate local contexts while remaining true to the core ideas to ensure improvements in practice that carry the warrant of effectiveness.
What we need is less fidelity of implementation (do exactly what they say to do) and more integrity of implementation (do what matters most and works best while accommodating local needs and circumstances). This idea of integrity in implementation allows for programmatic expression in a manner that remains true to essential empirically-warranted ideas while being responsive to varied conditions and contexts.
What does it take to achieve integrity in implementation? The answers permeate all of the current work of the Carnegie Foundation, including the reconceptualization of the education research enterprise such that it better addresses real problems of practice and generates knowledge that genuinely improves practice (i.e., the application of improvement science and improvement research) and the redefinition of the human organization that pursues education R&D (the formation of Networked Improvement Communities that continuously test their practice to ensure that proposed changes are, in fact, improvements).
Just how this is done is being explored and documented in the Foundation’s work. While much more needs to be learned, tested, and shared, some focal areas are clearly emerging as requiring serious and thoughtful attention if implementation with integrity is to be realized. The first of these is in the nature of programmatic design (and even the design process itself). The second is in the manner in which implementation is pursued.
Simply put, when we design for implementation with integrity, we design differently – both the process and the characteristics of the resulting programs. While much will be elaborated in subsequent postings here, some considerations include:
- identify goals as measurable aims;
- develop a comprehensive and public articulation of the problem and the system that produces it;
- guide development with clearly articulated design principles, including essential characteristics that are definitional to the solution;
- create generative structures that accommodate integrative adaptations while enforcing essential characteristics;
- identify, encourage, and embrace variants, but test them;
- enter into authentic partnerships (NICs) to promote integrity of implementation, grounded in:
  - common goals,
  - shared values,
  - shared power, and
  - real problems to solve; and
- discipline the implementation effort with a commonly held measurement model that ensures accomplishment and with the rigor of improvement research to test local adaptations for validation as improvements.
Each entry in the list above is a topic worthy of extensive elaboration. In many cases there are methodologies, tools, and processes that address each. The Foundation is currently learning how to use these tools effectively by working with practitioner scholars in Networked Improvement Communities to address real and pressing problems of practice. The knowledge that we acquire will be shared broadly so that all who are interested can learn along with us how a science of improvement supporting the work of these communities can make integrity of implementation a reality.
TED’s Chris Anderson says the rise of web video is driving a worldwide phenomenon he calls Crowd Accelerated Innovation — a self-fueling cycle of learning that could be as significant as the invention of print. But to tap into its power, organizations will need to embrace radical openness.
A concrete way to learn how a Networked Improvement Community (NIC) might organize and carry out a better program of educational R&D is to build one. In this spirit, the Carnegie Foundation, in partnership with several other colleagues and institutions, is now initiating a prototype NIC aimed at addressing the extraordinary failure rates in developmental mathematics in community colleges.
The aim of this NIC is to double the proportion of students who in a one-year course sequence achieve college credit and are mathematically prepared to succeed in subsequent academic pursuits. Our first effort in this regard is to launch the Carnegie Statway Network. This network is redesigning traditional developmental mathematics by creating a one-year pathway to and through statistics that integrates necessary mathematics learning along the way.
Carnegie President Anthony Bryk and colleagues delved into this work during a recent presentation at the annual meeting of the American Educational Research Association. He emphasized the Foundation’s commitment to an approach to educational research and development that joins practitioners, researchers and developers in purposeful collective action to address a problem of practice, in this case developmental math. Bryk said this network organizational approach can surface and test new insights and enable more fluid exchanges across contexts and traditional institutional boundaries—thus holding potential to enhance designing for scale.
“We are committed to principles of openness and transparency,” he said. “Openness of all of the resources we are building and drawing on. Transparency in sharing what we are doing, why we are doing it and what we are learning along the way—both successes and failures.”
Carnegie is drawing on Engelbart’s 1992 work on high-performing learning organizations, in which networked improvement communities organize and apply diverse expertise to solve complex problems. Engelbart’s Multilevel Model for Learning for Improvement characterizes the work of organizations in terms of three broad domains of activity. For Carnegie, A-level work is the front-line teaching and learning work of classrooms. B-level activity describes within-organization efforts designed to improve that on-the-ground work (like the work of institutional research units in community colleges), and C-level activity is inter-institutional engagement in concurrent development. This model affords mechanisms for testing the validity of local knowledge and adjusting local understanding of the true nature of a problem.
Specifically, the Carnegie network involves the community college faculty in participating institutions who teach and implement Statway and other math pathways, along with Carnegie’s improvement specialists and researchers. Together, they test changes with hypothesized benefits, warrant those changes with empirical evidence, provide for local adaptations, and over time contribute to the modification of the pathway. The NIC also includes deans, institutional researchers, and others who address institutional requirements; thinking partners, individuals with technical and substantive expertise; and Carnegie staff, who provide ongoing technical, analytic, and organizational support as the hub for the network.
NICs engage in disciplined inquiry. These inquiries are organized around the four core questions of improvement science—Carnegie’s approach to R&D: What are we trying to accomplish? How do we understand the problem and the system in which it is embedded? What change might we introduce? How will we know that the changes are improvements?
Measurement is vital. Anchoring the NIC around a common core of interventions, participants conduct multiple small tests of change, also known as rapid prototyping. As a professional community, we study the impact of those changes, learn from them, and adjust as needed. We are paying close attention to variability in performance and the multiple factors that may contribute to it. For example, we expect that Statway effects will vary depending on specific characteristics of students, faculty, and the contexts in which they both work. Given that, instead of asking whether an intervention works (e.g., “Is A better than B?” “Is C better than nothing?”), in the NIC we ask, “What works, when, for whom, and under what conditions?” It is not good enough to know that Statway can be made to work in a few places—the point of an improvement-oriented approach to education R&D is to achieve effective implementation across local contexts, reliably and at scale.
The design of this work is practical and nimble, adapted from practices pioneered by the Institute for Healthcare Improvement. The basic idea is straightforward: establish baseline results, intervene, measure outcomes, and keep doing it. It’s called a PDSA cycle — Plan, Do, Study, Act. For example, in the study phase, measurement is conducted in the web of daily activity: a 60-second student survey (simply asking how they’re doing); a three-minute teacher report (asking how the lesson went and what they might change and why); or informal queries and comments. The idea is to test fast, fail fast and early, learn, and improve.
With 30 colleges involved and an as yet unknown number of students, we are collecting data from multiple data streams and will feed this continuous stream of information to quick problem-solving teams with one-week revision and testing targets. With a year of lessons and different start dates among our college network, we aim to exploit and study the natural variation in the outcomes of implementations and feed that knowledge back into the design to increase its effectiveness in real time. The bottom line is to learn how to put usable knowledge to work as part of the design and development process to support increasing student success.
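As one way to picture that rhythm, here is a minimal Python sketch of a weekly study-and-act step operating on the kinds of quick measures described above; the field names, colleges, and decision threshold are hypothetical, not the network’s actual instruments or rules.

    from statistics import mean
    from collections import defaultdict

    def study(responses):
        """Summarize a week of 60-second student check-ins by college."""
        by_college = defaultdict(list)
        for r in responses:
            by_college[r["college"]].append(r["doing_ok"])  # 1 = student reports doing OK
        return {college: mean(values) for college, values in by_college.items()}

    def act(summary, target=0.7):
        """Turn the study step into a next-step decision for each site."""
        return {college: ("keep and spread" if rate >= target else "revise and retest next week")
                for college, rate in summary.items()}

    # Do: one week of made-up responses from two colleges in the network.
    week = [
        {"college": "A", "doing_ok": 1}, {"college": "A", "doing_ok": 1},
        {"college": "A", "doing_ok": 1}, {"college": "B", "doing_ok": 0},
        {"college": "B", "doing_ok": 0}, {"college": "B", "doing_ok": 1},
    ]

    summary = study(week)   # Study: share of students reporting "doing OK" at each college
    print(act(summary))     # Act: {'A': 'keep and spread', 'B': 'revise and retest next week'}

In practice the Plan step would name the change being tested and the prediction attached to it, and the weekly output would go to the problem-solving teams described above rather than to a print statement.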
The big question, of course, is whether we can initiate and sustain a networked community that accelerates improvement. The whole enterprise is itself a learning-through-doing experiment. We are indeed doing improvement research on ourselves. Stay tuned.
Carnegie Ideas Gaining Traction
The notice from the U.S. Department of Education’s Institute of Education Sciences for five-year funding contracts for each of ten Regional Education Laboratories contains language familiar to Carnegie. The solicitation reads: “The purpose is to enter into contracts with entities to establish a networked system … .” Further, after expanding on the expectations of the labs’ mission to build the research capacity and knowledge bases in their states and districts, the call is that they “carry out these priorities primarily by organizing … networks of practitioners, policy makers and others in ‘research alliances.’”
The promotion of networks and alliances echoes our call for a networked improvement community framework in which research and practice communities join to accomplish improvement at scale. Carnegie has made a concerted effort to provide leadership in the R&D field, and the IES language is an indication that our ideas are gaining traction. We have been operating as a “thought partner” for the Knowledge Alliance, whose members are mostly these federal education laboratories, encouraging them to work together using a networked approach since 2008, when the alliance’s president, Jim Kohlmoos, interviewed Tony Bryk, then newly named Carnegie’s president, for a video presentation shown at the organization’s summer retreat.
“It seems clear that the basic framework for the IES solicitation is informed by Carnegie’s vision,” Kohlmoos said, “just as Tony Bryk and Louis Gomez’s seminal work on reinventing R&D has been the catalyst for much of our collective visioning and thinking over the past several years.”
In the past few years, organizations like ours have looked to the Institute for Healthcare Improvement (IHI) as a model for employing improvement research to support sustainability and scaling efforts in various fields. There are many good reasons for this. IHI, created by Don Berwick and colleagues in the late 1980s, has used small tests of change and rapid prototyping to support the adoption of patient safety practices, engaging 7,000 U.S. hospitals to prevent more than five million incidents of medical harm over a very short period. Today some of the world’s finest hospitals and medical practices have embraced IHI’s ideas and techniques to improve patient care. Developing, introducing, and sustaining needed innovation and systems change is something they clearly know how to do.
More importantly, although adaptations are needed for those of us in education to use the tools of healthcare improvement, there is much we can learn and use from that work.
As Carnegie President Tony Bryk, Senior Partner Louis Gomez, and Associate Partner Alicia Grunow write in a recent essay (PDF), “a core set of principles undergird (the research on health care services) and forms a science of improvement.” They write that the IHI work provides necessary frameworks for our efforts in education improvement. They also posit that many other fields in addition to education can benefit from IHI’s approach to improvement research. “Extracting core ideas and translating them into more productive institutional arrangements for educational R&D pose important questions for learning scientists, organizational sociologists and political scientists interested in how expertise networks advance social improvement,” they write.
Indeed, Carnegie staff continues to work with IHI to adapt and apply tools like 90-day cycles, driver diagrams and improvement maps, the support mechanisms for mapping a complex problem-solution space and engaging a community around a problem of practice. Bryk and colleagues outlined the similarities between healthcare and education: “Like education, health services are carried out through complex organizations. Like physicians, school and college faculty expect to have discretion to determine how best to respond to a particular set of presenting circumstances. Both enterprises are human and social resource intensive, and both operate under largely decentralized governance arrangements.”
However, despite the value and efficiency of learning from the success of others, the alignment of tools to problems of practice needs a bit of maneuvering to ensure an appropriate fit across fields. In an article published in the January/February issue of Educational Researcher (download PDF), Anne Morris and James Hiebert make a strong case for using “a science of improvement” to create shared instructional products in order to improve teaching. They too studied IHI, comparing standardized treatment protocols in healthcare to the standardization of instructional products in K-12 education. Morris and Hiebert recognize that there are differences between improving the healthcare system and improving classroom teaching.
In a recent interview, they talked about those differences:
In the healthcare system, it often is the case that the goals can be stated more precisely, fewer variables are presumed to be the direct, immediate causes of the outcomes, and (in part because of these two factors) simpler assessments can be created to measure the outcomes. The goals targeted for improvement in the IHI program often focus on specific errors that occur during the practice of routine procedures. These goals can be stated precisely, and in ways that are understood immediately by all healthcare providers associated with the relevant procedure. The goals for improving teaching focus on students’ learning. (This is because, in the end, all changes in teaching must be measured against whether they help students better achieve specified learning goals.) But students’ learning goals often elude precise description and can be interpreted in different ways by different educators. For example, in elementary mathematics the goal of understanding the relationship between grouping quantities by units that increase by a factor of 10 and the place-value numerals is a learning goal at an appropriate grain size (i.e., it can guide the development of a lesson) but one that has multiple components and can prompt different instructional approaches.
For the elementary mathematics learning goal just stated, multiple causes can be identified for students’ success or failure. There are a large number of hypotheses that can be offered to explain students’ learning, only some of which are related to the nature and quality of teaching.
Finally, assessing whether students achieve learning goals can be complicated, especially if the goals include ‘conceptual understanding.’ Multiple assessments are needed to conclude how deeply and broadly students understand important concepts.
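To make the grain size of that elementary mathematics goal concrete, one illustrative rendering (the specific number is ours, chosen only for illustration) of the relationship between grouping by units that grow by a factor of 10 and the place-value numeral is:

    3482 = 3 \times 10^3 + 4 \times 10^2 + 8 \times 10^1 + 2 \times 10^0

A student with this understanding can also regroup flexibly, reading 3482 as, say, 34 hundreds and 82 ones; that flexibility is one of the multiple components Morris and Hiebert point to, and it helps explain why a single assessment item rarely captures the goal and why different instructional approaches can legitimately target it.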
Hiebert and Morris conclude that, “Although these differences are important, they do not undermine our confidence that lessons from improving healthcare can be used to examine strategies for improving classroom teaching. We believe there are enough similarities between the systems that studying healthcare improvement is worth the effort.”
Carnegie vigorously agrees. However, Carnegie staff and the IHI coaches have also realized that adjustments would be needed in the IHI approach because of the differences in our work in community colleges, a sector that shares aspects of the problems in sustaining change with K-12 but has its own culture to address. Associate Partner Alicia Grunow explained that Carnegie’s initial attempts to explore and talk about the use of some of the IHI improvement tools in education have both challenged and appealed to many of the ways of doing business in the education world.
“One benefit of improvement science is that it provides a rigorous way to explore ‘how’ questions, those which are neither trivial nor typically addressed by traditional research methodologies,” she said. “But they also require a different paradigm for thinking about measurement.” She explained that from the practitioner point of view, improvement methodologies provide a discipline to guide the large amount of effort that goes into making changes in classrooms, schools and districts. But this discipline requires different processes for deciding what to work on and “rolling out” changes in order to learn from smaller, iterative tests of change. “This tension between ‘fast enough to be useful’ and ‘slow enough to be thorough’ is a tension that affects any sort of bridging between research and practice in education,” she said. “Hitting that sweet middle ground requires everyone to work in ways that are often slightly out of their comfort zone.”
Grunow agreed with Morris and Hiebert that a key difference between improvement research in education and in medicine is that in education there are fewer agreed-upon measures of what constitutes successful teaching and learning. She said that the measures currently used and being developed may be useful for a variety of purposes, but are not likely to generate the kind of information needed to support improvements in the day-to-day work of instruction in classrooms. In health services, however, the measures—patients with improved health or no accidental deaths—are much clearer.
Grunow added that some of the culture shifts that need to happen for improvement science to be embraced by education are very similar to those that faced medicine a decade ago, when healthcare professionals too often accepted that “complications happen,” much as too many educators accept that “some students simply won’t learn as much as others.” The present culture of working on solutions decoupled from problems, of pointing fingers at individual teachers as “bad apples” instead of working on the systems within which all teachers work, and of accepting failures as unavoidable is not unlike what the quality movement in healthcare faced at the beginning of its efforts. “Their success facing what is in some ways an analogous culture gives me hope for improvement in our enterprise,” she said.
A New Look at Scale and Opportunity to Learn
“There still remains room for optimism in technology’s ability to transform education, in part, because of its almost unique role in enhancing all students’ opportunities to learn,” write Carnegie Senior Partner and University of Pittsburgh professor Louis Gomez, Carnegie Visiting Resident Scholar Bernard R. Gifford and Kim Gomez, also of the University of Pittsburgh. The authors prepared the paper, “Educational Innovation and Technology: A New Look at Scale and Opportunity to Learn,” for the Aspen Institute’s Congressional Program Conference, “Transforming America’s Education Through Innovation and Technology.” It is now part of the Carnegie Foundation’s elibrary.