Guest post by Dr. Lyndsey Zurawski
Assessments Are NOT Equal. For most Speech-Language Pathologists (SLPs), completing evaluations is a relatively small part of the job. According to the 2020 ASHA Schools Survey, SLPs across all facility settings reported spending an average of 4 hours per week on diagnostic activities.
So, what are diagnostic activities? They can include formal evaluations, informal evaluations, scoring, analysis of results, screenings, observations, and report writing. Does that sound like more or less than the time you spend on diagnostic activities? For SLPs with high caseloads and limited workload time allotted in their schedules, completing an evaluation can seem tedious.
When required to conduct an evaluation, whether an initial or a reevaluation, convenience and/or speed can sometimes override best practices.
We’ve all been there: an evaluation needs to get done, so we grab the quickest evaluation tool on our shelf and administer it. We score it, write up the report, and, along with observations, parent input, and classroom/intervention data, we go to eligibility or ineligibility.
So, what about best practices? Administering that one quick test is NOT best practice. However, we have all been there, and when I say “we,” I mean me!
What Are Best Practices? Why Assessments Are NOT Equal
First, let’s look at what should be included in a comprehensive evaluation:
- Case History/File Review
- Hearing and Vision Screenings
- Observations in more than one setting
- Assessments in all areas of suspected disability
- Culturally and linguistically sensitive assessments
- Recommendations and Summary
Next, let’s jump to assessments in all areas of suspected disability. Do you have a favorite assessment tool? I don’t just have a favorite tool; I have a favorite battery of assessments. To clarify, part of my job is as a diagnostician conducting in-depth language and literacy evaluations.
My assessment battery has to be more comprehensive than what I would typically utilize for my caseload evaluations and reevaluations. However, that should not stop me (or you) from utilizing best practices when we conduct an evaluation.
It is also important for us, as SLPs, to consider diagnostic accuracy, which includes sensitivity and specificity. I’m not going to get super nerdy and discuss the ins and outs of psychometric properties; instead, I’d like to refer you to a page on The Informed SLP that is easy to read and digest (Brydon, 2018).
Table: All Assessments Are NOT Equal

| Assessment | Ages | Measures |
| --- | --- | --- |
| Oral and Written Language Scales (OWLS-II) | | |
| Test of Integrated Language and Literacy Skills (TILLS) | 6 thru 18 | Comprehensive language and literacy assessment |
| Oral Passage Understanding Scale (OPUS) | 5-0 thru 21-0 | Listening (auditory) comprehension and memory skills |
| Language Processing Test-3rd Edition (LPT-3) | 5-0 thru 11-11 | Strengths and weaknesses within the language hierarchy |
| Illinois Test of Psycholinguistic Abilities-3rd Edition (ITPA-3) | 5-0 thru 12-11 | Spoken analogies, morphology, syntax, reading/written language, decoding, and encoding |
| Test of Adolescent and Adult Language-4th Edition (TOAL-4) | 12-0 thru 24-11 | Spoken and written language |
| Clinical Assessment of Pragmatics (CAPs) | 7 thru 18 | Social language |
As you read, I’m sure you’re thinking I should hurry up and get to the part where I share which tests to use and not use. But it is not as simple as that, so I am going to share what I use and why.
To clarify, this is based on my own clinical experience, along with information related to the factors above. It is best practice to administer more than one assessment when conducting an evaluation, because all assessments are NOT equal.
All assessments are not created equal.
Simply speaking, sensitivity is how well a test identifies children who truly have a disorder, and specificity is how well it rules out children who do not. The important thing to know is that while no test will be 100% accurate, a test is judged to be “good” if it is 90% to 100% accurate, “fair” if it is accurate 80% to 89% of the time, and anything less than 80% is considered unacceptable (Plante & Vance, 1994).
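For readers who want the underlying math, sensitivity and specificity come from the standard diagnostic-accuracy definitions (a general sketch of the formulas, not figures from any particular test):

```latex
\text{Sensitivity} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}}
\qquad
\text{Specificity} = \frac{\text{true negatives}}{\text{true negatives} + \text{false positives}}
```

For example, a test that correctly flags 45 of 50 children who truly have a language disorder has a sensitivity of 45/50, or 90%, which would fall in the “good” range described above.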
Would you like a resource that puts this information at your fingertips? Our friends over at the Virginia Department of Education created an SLP Test Comparison card that includes this information. In 2018, they added a few additional tests on a supplemental card.
As diagnosticians, which each of us is, it is our job to pick and choose assessment tools that can reliably and validly identify a speech, language, literacy, and/or social communication disorder.
How do we do that? There’s no simple answer, but it comes down to using best practices, including evidence-based practice, clinical judgment, and experience, while considering cultural and linguistic diversity and sensitivity in all of our evaluations.
Oral and Written Language Scales-II (OWLS-II)
One of the most common assessment tools used, the Oral and Written Language Scales-II (OWLS-II), does not report sensitivity and specificity. Information for other common assessments, such as the Test of Integrated Language and Literacy Skills (TILLS), the Comprehensive Assessment of Spoken Language-2nd Edition (CASL-2), and the Clinical Evaluation of Language Fundamentals-5th Edition (CELF-5), is included on the Virginia DOE comparison cards.
Another factor is cultural and linguistic sensitivity/bias in our assessments. One way we can ensure that we don’t under- or over-identify students is to utilize dynamic assessment, especially with students from diverse backgrounds. Ireland (2019) reported that sensitivity and specificity for dynamic assessment have been documented as high as 100%.
Test of Integrated Language and Literacy Skills (TILLS)
Most frequently, I utilize the TILLS (for students 6 thru 18) because it is a comprehensive language and literacy assessment with solid sensitivity and specificity. It provides detailed information for interpreting the results, including the relation/impact to curriculum-based content.
As an added bonus, there’s a report-writing template available to use for your own reports. While it is lengthy to administer, I could use this test alone and feel comfortable with my results. However, I typically supplement it with other assessment tools that are not considered global language assessments.
Oral Passage Understanding Scale (OPUS)
In addition, another assessment tool that I use, which is often little known or underrated, is the Oral Passage Understanding Scale (OPUS) for students 5-0 thru 21-0. The publisher, Western Psychological Services (WPS), provides the following information in the overview: the OPUS “…is a new measure of listening (auditory) comprehension. It evaluates a person’s ability to listen to passages that are read aloud and recall information about them. This ability is key to success in the classroom, as well as in social and occupational settings. Furthermore, the OPUS also measures memory skills, which are integral to listening comprehension.”
What I like most about this assessment is the length and complexity of the passages, along with the types of questions, which closely align with the kinds of questions students may be asked in a classroom environment.
Language Processing Test-3rd Edition (LPT-3)
If you are looking to delve into where a student’s breakdown occurs within the language hierarchy, you’ll want to administer the Language Processing Test-3rd Edition (LPT-3) for students 5-0 thru 11-11.
This test is fairly quick to administer and can provide insight into where a child’s strengths and weaknesses are along the language hierarchy.
Most importantly, it is a great tool to re-administer after a child has received therapy for some time to demonstrate the growth and progress in the weak areas.
Illinois Test of Psycholinguistic Abilities-3rd Edition (ITPA-3)
For another global language test (that is lesser known), I like to administer the Illinois Test of Psycholinguistic Abilities-3rd Edition (ITPA-3) for students 5-0 thru 12-11. I got my hands on this assessment when my boss was changing positions. She told me, “This is an oldie but a goodie.” I did not understand how true a statement that was until I administered the test.
Importantly, this test includes the assessment of spoken analogies, spoken vocabulary, morphology, syntax, reading comprehension, written language, decoding, and encoding. It also does not take a significant amount of time to administer in its entirety.
Test of Adolescent and Adult Language-4th Edition (TOAL-4)
For older students, similar to the ITPA-3, I like to administer the Test of Adolescent and Adult Language-4th Edition (TOAL-4) for students 12-0 thru 24-11. This test measures spoken and written language, including word opposites, word derivations, spoken analogies, word similarities, sentence combining, and orthographic usage. Again, this test provides a solid overview of students’ strengths and weaknesses without taking too long to administer.
Clinical Assessment of Pragmatics (CAPs)
If you are looking to assess social language, I have been enjoying the new Clinical Assessment of Pragmatics (CAPs), which is for students 7 thru 18. Why do I like this assessment?
- It is hard to find a solid standardized pragmatic assessment.
- It has digital video scenes that depict real-life scenarios (as much as is feasibly possible).
- It looks at a variety of aspects of social language, including pragmatic judgment (comprehension), performance of pragmatic language (expression), understanding context and emotions (the speaker’s intent through inference, sarcasm, and indirect requests), and nonverbal cues (facial expressions, prosody, and gestures).
- It helps determine a student’s strengths and weaknesses.
School-Age Language Measures (SLAM)
A lesser-known tool, an informal assessment measure, is the School-Age Language Measures (SLAM) from the LEADERS Project, which I use in almost every language assessment I give. So, why do I like the SLAM cards? Straight from the LEADERS Project, SLAM cards are “meant to elicit a language sample that can be analyzed in the context of typical language development as well as the child’s background (e.g., educational experiences, family, linguistic and cultural background, etc.). So, for this reason, no scores are included.” They have even updated the site to include guidelines for analysis. And the best part: IT IS FREE!
Other Tests You Should Know: All Assessments Are NOT Equal
In conclusion, some additional tests that I’m not going to go in-depth about, but that I love having in my assessment toolkit, are the School Motivation and Learning Strategies Inventory (SMALSI), the Gray Oral Reading Test-5th Edition (GORT-5), and the Phonological Awareness Test-2nd Edition: NU (PAT-2:NU).
Now, you may have noticed I did not include the OWLS-II, CELF-5, or CASL-2 in my battery. I will and have administered all of them, but I have found weaknesses within each of these assessments.
Let me mention a few. For the OWLS-II, there is often not enough information that can be gleaned simply from administering the Listening Comprehension and Oral Expression subtests. In regard to the reading subtest, the rigor is just not there, and the same goes for the written language subtest. That is not to say no good information can be obtained from these subtests; however, given the choice, there are better options. In regard to the CELF-5, the primary reason I do not administer it often is that it relies heavily on auditory input and memory.
Many of the students we work with or are assessing have difficulty with auditory memory, short-term memory, and attention, which can negatively impact the scores obtained on the assessment. As for the CASL-2, I do like the assessment, but it is lacking in some areas, including diversity.
It also can take quite a long time to administer in its entirety. While the subtests can stand alone, if I am going to administer the CASL-2, I prefer to give the entire assessment so I can look at the strengths and weaknesses across subtests.
So, if all assessments are not equal, how do you find the best one? If you’ve read this far, I’d love to hear from you about your favorite assessment tools and why! Maureen and I will be doing a live chat on Instagram where we will candidly discuss assessments and try to answer any questions you have!
You can follow me at @speechtothecore on IG, FB, and Twitter.
Guest post by Lyndsey Zurawski, SLP.D, CCC-SLP
American Speech-Language-Hearing Association. (2020). 2020 Schools survey. Survey summary report: Numbers and types of responses, SLPs. www.asha.org.
Brydon, M. (2018, September). How do we judge the quality of a standardized test? The Informed SLP.
Ireland, M. (2019, July). Dynamic Assessment. Paper presented at FLASHA Annual Convention, At Sea, FL.
LEADERS Project. (2020). SLAM The Crayons Picture. LeadersProject.Org.
Plante, E. & Vance, R. (1994). Selection of preschool language tests: A data-based approach. Language, Speech, and Hearing Services in Schools, 25, 15-24.
Virginia Department of Education. (2015). SLP Test Comparison.
Western Psychological Services. (2020). Clinical Assessment of Pragmatics. WPS.
Western Psychological Services. (2020). Oral Passage Understanding Scale. WPS.