Students are bad at Google search
It is a common complaint that students over-rely on Google and are unfamiliar with alternative search engines and databases. However, the ERIAL project, a two-year anthropological study, discovered that students also fail at using Google. The problem is that most students do not fully understand how Google works, or the nature of the algorithms at work. The presentation paper Search Magic: Discovering How Undergraduates Find Information by Andrew Asher (PhD) is seminal reading in this regard. Asher writes: “By shaping the processes through which information is found, and by extension, becomes known, search algorithms perform an epistemological function. By structuring the discovery of information, search algorithms express a form of Foucaultian disciplinary power that provides the scaffolding for how students complete their academic work and profoundly structures the way students acquire knowledge.”

Dr. Asher’s research and findings, I feel, merit far deeper thought about how algorithms dictate what we learn. ISTE’s standards for students include “Knowledge Constructor: Students critically curate a variety of resources using digital tools to construct knowledge, produce creative artifacts and make meaningful learning experiences for themselves and others.” The question research such as ERIAL asks is: are the tools of that curatorship trustworthy replacements for real-life librarians? Is there not a point at which we enter a Schroedinger’s cat scenario, where the student’s presence within the search algorithm’s calculation affects the results?

Leaving the metaphysics there for a moment, there are a number of useful lesson plans to engage in a K-12 setting that should not be overlooked. Naturally, many of these lessons do indeed fall under the broad term of digital literacy, yet in my view they are pragmatic and mechanistic enough to be considered computer literacy.
I welcome debate around my splitting hairs in the comments.

Basic software functionality: Not so great
In 2012, Change the Equation and the Organisation for Economic Co-operation and Development (OECD) conducted a global research project, the 2012 Programme for the International Assessment of Adult Competencies (PIAAC), which tested three skills among adults: literacy, numeracy, and “problem solving within a technology-rich environment”. The raw data for US millennials (ages 16 to 24 in 2012) on the third skill test (p. 268) makes for interesting reading. The study rated success at completing certain computer tasks using four proficiency levels: below Level 1, and Levels 1, 2 and 3. The percentage results are shown below, courtesy of this CNBC article.