Things have been quiet (okay, completely silent) around Geek Studies lately, as I’ve recently moved from Philly to Boston, started teaching a graphic design class, and focused more closely on work that takes me away from the blog. And things will likely remain quiet until I defend my dissertation—but every now and then, something will occur to me that will make me need to speak up again.
What’s got me blogging now is a funny coincidence: Just as I was commenting to my friend and fellow Annenberger Moira that the concept of “digital natives” needs some revamping (if not outright rejection), she pointed me to “Generational Myth,” an article in the most recent Chronicle of Higher Ed by Siva Vaidhyanathan. (Link updated to direct to the free version—thanks, Siva!) The long and the short of the article is precisely what I have been observing in my own class: The generation of “digital natives” might not be so native to the digital as many presume, and is certainly not so homogeneous as to be describable by a single way of thinking across the board.
Some key excerpts:
College students in America are not as “digital” as we might wish to pretend. And even at elite universities, many are not rich enough. All this mystical talk about a generational shift and all the claims that kids won’t read books are just not true. Our students read books when books work for them (and when I tell them to). And they all (I mean all) tell me that they prefer the technology of the bound book to the PDF or Web page. What kids, like the rest of us, don’t like is the price of books. […]
By focusing on wealthy, white, educated people, as journalists and pop-trend analysts tend to do, we miss out on the whole truth. […]
[Eszter] Hargittai explained why we tend to overestimate the digital skills of young people: “I think the assumption is that if [digital technology] was available from a young age for them, then they can use it better. Also, the people who tend to comment about technology use tend to be either academics or journalists or techies, and these three groups tend to understand some of these new developments better than the average person. Ask your average 18-year-old: Does he know what RSS means? And he won’t.”
Agreed.
What got me thinking about all this was teaching applications in the Adobe Creative Suite to seniors in college. I expected that they would not know how to use Photoshop, but that they would be proficient with basic document editing in Microsoft Windows. I was mistaken. They certainly know how to open a web browser to access Facebook, a webmail client, and Google Image Search, but the interfaces that serve up such applications seem alien to them.
In teaching Photoshop, I was getting questions like, “How do I resize a window so that I can see the other windows behind it?” Photoshop uses the same window resizing rules as any other Windows application: minimize/maximize buttons near the “close” button in the upper right corner of each window, and a space to click and drag to resize in the lower right corner. I believe I answered this question for no fewer than 10 students (out of 60, between three sections), and those are only the ones who spoke up to ask.
Another question that came up comparably frequently: “How come it keeps pasting the thing I pasted before?” The answer is that you need to copy something new in order to paste something new, and many students thought it was sufficient just to select the new thing they wanted to paste. Again, this works no differently from any other program in Windows. I also noted that the majority of students in my classes go to the Edit menu for common functions like copying and pasting—sometimes hunting around in the menu to find them—rather than using keyboard shortcuts.
And, for what it’s worth, they certainly don’t prefer clicking back and forth between Adobe applications and the PDFs I give them for handouts (in my perhaps misguided attempt to save a tree). These students are not ready to give up their paper media, and I certainly don’t blame them.
These are not dumb students: They have asked some good questions and have come up with some clever visual solutions to the projects I lay out for them. Several are also very enthusiastic and quite excited to learn such programs: A number have commented to me that this is like nothing they’ve ever done before, that they could fool around with these programs for hours, that this is now their favorite class. I honestly don’t think that has as much to do with me as it does with getting to learn some truly useful skills and practice with some surprisingly usable tools.
I would also echo Siva’s repeated insistence that “digital nativism” is more of an economic issue than some writers typically recognize. I don’t think these students are “poor,” per se, but chatting with one student last night after class helped put things into perspective for me. She mentioned that she had transferred to this school from another college (about a mile away) because of the steep tuition at the other place. She was glad, she said, to learn these design programs now because they seemed more commonly known and less commonly taught for beginners at her old school. (The class I’m teaching now isn’t a design class for art majors, but a required course teaching basic design literacy for all Communication and Journalism students.)
Of course, I’m not going to say there’s no truth to the idea that increased access to digital technology has changed the way that young people think about communication and information. As I said, these students are perfectly capable with Google Image Search, and that certainly changes the way that they approach a design project compared to how design students would have approached such a project 20 years ago. But my experiences teaching have made me curious to read (and perhaps conduct, after the dissertation) some research about how well college students really understand visual digital interfaces—or, perhaps, which interfaces they truly understand.
I wonder, for example, whether the applications that the average college student understands most intuitively are web apps more so than the browsers and operating system windows used to navigate to them. Has the operating system interface become so transparent, with so much of the interaction moved into the content of the individual application, that simple, more or less universal functions like “Ctrl-C” and window minimizing now seem acceptable to ignore?
I suspect that someone else has already written on this, so I plan to hunt around in some journals before I muse much further on it. I just felt the need to make a note of this before I get caught up in teaching Illustrator and InDesign, and some other anecdote gets me thinking about something else entirely. As many have warned me, teaching can be as much of a learning experience for the instructor as it is for the students.