Let me just say it upfront: I absolutely love data.
Spending my days working with kids, teens, and adults and watching the incredible magic of learning unfold is what drew me to this field in the first place. And while it feels like magic, it’s really the result of intention, careful planning, thorough analysis, and continuous iteration. That initial “I’ll take the job” was all about connecting and making an impact. But what’s truly kept me here? What keeps me engaged, energized, and constantly growing as a clinician?
Data.
I know some might call this a bold statement, but I truly believe data sits at the very heart of applied behavior analysis. Our entire science breathes through the moment-to-moment, day-to-day decisions we make, all thanks to our ability to capture and interpret behavior. But it’s not just about the immediate. Data also guides our choices over weeks, months, and even years. It’s how we track progress, evaluate long-term effectiveness, and figure out if we’re really building meaningful, lasting change. Data tells us when to start, when to stop, when to pause, and when to try something new.
More than just numbers on a sheet or a graph on a screen, data forms the foundation for real change: for our learners, our teams, and yes, even for us.
Data: The Stories They Tell (If We Listen)
Let’s stretch our understanding of data far beyond just assessments and graphs in CareConnect. Data can actually reveal so much:
- When an intervention is truly hitting the mark.
- If a Foundational Plan is effectively supporting a learner.
- When a team member is engaged, or perhaps silently struggling.
- Whether a learner is progressing, plateauing, or giving us signs that we need to change course.
- That we are, in fact, working with learners who are happy, relaxed, and genuinely engaged.
- How a family’s quality of life is improving over time.
Data isn’t static. It’s dynamic, interconnected, and highly responsive. It works best when we see it as part of a living system, not just isolated points. When we connect data across time and context, we gain insights into patterns and trends that a single snapshot (or even just one data point) simply can’t provide.
A System of Measurement: Seeing the Whole Picture
To truly harness the power of data, we must think in systems. Not all data serve the same purpose, and treating them as if they do would be a mistake. So, let’s break data down into different levels or categories. This gives us a solid framework for how to collect and analyze it.
Macro Data
These are the big-picture indicators that guide the overall direction of treatment (Johnson et al., 2021; Smith & Fuller, 2023). They include:
- Norm-referenced tools like the Vineland.
- Criterion-referenced assessments such as the VB-MAPP and ABLLS-R.
- Developmental or outcome-based benchmarks.
Macro data shape our long-term vision and help us understand how our learners are functioning in broader, real-world contexts.
Micro Data
This is the moment-to-moment, session-level information (Johnson et al., 2021; Smith, 2020; Smith & Fuller, 2023; Wilson, 2024):
- Discrete trial data.
- Data collected on each step in a task analysis.
- Family collaboration data, such as how often successful meals happen at home.
Micro data tell us how our procedures are working in real time. They are essential for adjusting teaching in the moment, within sessions.
Meso Data
Meso data help bridge the gap between short-term teaching and long-term outcomes (Smith & Fuller, 2023; Wilson, 2024). Meso data can take the form of:
- Probes that assess readiness for upcoming skills.
- Performance on combined or functional skill sets.
- Probes that assess emergence of new, untaught skills.
These data give us a sense of direction and help us anticipate future needs.
Meta Data
These are the data about the system itself. They measure the quality and consistency of our service delivery (Smith, 2020):
- Cumulative counts of mastered targets.
- Aggregate progress on a graph.
- Weekly Clinical AI Reports that include aggregate data on targets mastered, percentile schedules, improvement criteria met, and staff performance.
Meta data help us evaluate how well we’re implementing and maintaining our clinical systems.
And within each of these levels, there’s always room for qualitative data too. Anecdotes, observations, reflections from families and teams. These all offer context and depth that numbers alone can’t capture.
Connecting the Dots: How All Data Levels Work Together
Putting it all together is where the real insight comes from. Macro data, like the Vineland, directly inform programming. We develop specific treatment goals (micro data) that align with these larger domain and repertoire needs, then create specific programs (micro data, again) that work together toward those goals. We can capture meso data by probing those treatment goals or by analyzing how various program goals are progressing and how they relate to one another. We can also evaluate the program holistically by reviewing cumulative targets mastered each month. That, paired with the macro data gathered during the re-assessment of the Vineland, gives a powerful, layered view. When we apply this process across treatment goals and the program more globally, we gain a truly comprehensive understanding of how our learners are doing. These data sets don’t work in isolation; they’re interconnected. And by reviewing them together, we can make much better-informed decisions about each individual component.
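To make the roll-up concrete, here is a minimal sketch of how session-level mastery records (micro data) could be aggregated into a cumulative monthly count of targets mastered (meta data). The record format, field names, and function are purely illustrative assumptions, not CareConnect’s actual schema:

```python
from collections import defaultdict
from datetime import date

# Hypothetical micro-level records: one row per target, with the date mastered.
# Field names and mastery criteria are illustrative only.
mastered_targets = [
    {"target": "requests a break", "mastered_on": date(2024, 1, 12)},
    {"target": "ties shoes, step 3", "mastered_on": date(2024, 1, 28)},
    {"target": "labels emotions", "mastered_on": date(2024, 2, 5)},
]

def cumulative_by_month(records):
    """Roll micro-level mastery records up into a cumulative count per month."""
    monthly = defaultdict(int)
    for rec in records:
        key = (rec["mastered_on"].year, rec["mastered_on"].month)
        monthly[key] += 1
    cumulative, running = {}, 0
    for key in sorted(monthly):  # accumulate in chronological order
        running += monthly[key]
        cumulative[key] = running
    return cumulative

print(cumulative_by_month(mastered_targets))
# {(2024, 1): 2, (2024, 2): 3}
```

The same pattern, grouping fine-grained records by a coarser key and accumulating, is what lets one data set serve both in-session decisions and the monthly, program-level view described above.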
It Only Matters If We Use It
I could talk endlessly about the beauty of measurement, but here’s the truth: collecting data isn’t enough. Our job requires us to analyze what we gather, interpret what we see, and then act on it. That’s where the real power lies.
The science of behavior analysis truly begins when we start asking, “What does this mean for this learner, and what do I do next?” Data without action is just archived behavior. But data, coupled with thoughtful analysis and clear intention, can genuinely shape lives.
We’re in a field that uniquely allows for both precision and personalization. That’s a rare combination! We can create systems that respond directly to the individual in front of us while still being firmly grounded in scientific integrity.
Living the Data at Centria
At Centria, we’re fortunate to have access to tools that genuinely support this multi-level system of measurement:
- Graphs in CareConnect help us see our micro data in real time.
- Assessments, like the Vineland Adaptive Behavior Scales, provide macro data that frame our goal selection.
- Weekly AI Clinical Reports offer us meta-level summaries and cumulative counts.
- Our parent training systems give us insight into both meta and meso data through engagement tracking and session outcomes.
Each piece adds to the story. When we read them together, we gain a much better understanding of where our learner is, how far they’ve come, and where we need to go next.
Final Thought: The Learner Is Never Wrong
If the data are unclear, inconsistent, or not showing progress, we absolutely must remember this fundamental truth: the learner is never wrong. The behavior we observe is real. If something isn’t working, it’s our system, our environment, or our instruction that needs to shift.
And that’s not a flaw in the work. It’s the very essence of what makes behavior analysis so powerful. We aren’t bound to a rigid script. We’re guided by data, by the learner themselves, and by the endless opportunity to get better, always.
That’s what makes this work worth doing. And that’s why, even after all these years, I still love data.
About the Author
This blog was written by Kristin Smith, a Board Certified Behavior Analyst with over 20 years of experience in measurement, curriculum design, content analysis, systems work, and assessment. Currently pursuing her doctorate in Educational Technology, she’s passionate about connecting technology with clinical practice in meaningful ways.
Throughout her career, Kristin has made significant contributions to the field, particularly in curriculum design, client dignity and autonomy, and measurement systems. Her approach centers on applying rigorous scientific methodology to work that is fundamentally human-centered, ensuring both precision and personalization.
As an active contributor to the behavior analytic community, Kristin presents, writes, and shares research on instructional design and the critical importance of client assent. Her extensive direct experience with individuals across the lifespan, including children, teens, and adults, in diverse settings, combined with her strong foundation in measurement, consistently informs her systems-level thinking and impactful content development.
References:
Johnson, K., Street, E. M., Kieta, A. R., & Robbins, J. K. (2021). The Morningside Model of generative instruction: Bridging the gap between skills and inquiry teaching. Cambridge Center for Behavioral Studies, Inc.
Smith, K. (2020, February 20). Supercharge your clinical programming with multi-tiered measurement. CentralReach. https://centralreach.com/blog/supercharge-your-clinical-programming-with-multi-tiered-measurement/
Smith, K., & Fuller, T. C. (2023, January). Multi-level measurement: How to avoid analysis paralysis and make the most of the data you collect [Webinar]. CentralReach. https://institute.centralreach.com/learn/course/multi-level-measurement-how-to-avoid-analysis-paralysis-and-maximize-client-progress/multi-level-measurement-how-to-avoid-analysis-paralysis-and-maximize-client-progress/multi-level-measurement-how-to-avoid-analysis-paralysis-and-maximize-client-progress?client=centria-healthcare&page=4
Wilson, M. (2024). Finding the right grain-size for measurement in the classroom. Journal of Educational and Behavioral Statistics, 49(1), 3-31.