WeekNotes for 2024 Week 39: Attention as Pedagogy

weekNotes

Weeknotes are a habit I’m cultivating where I share what I’m working on or thinking about, primarily in my professional life, without worrying too much about the ideas being fully formed.

thinking about / working on:

Show Up & Make

I’ve rebranded the Makerspace co-working sessions (which are also my sort-of office hours) from “Makerspace Sandbox Sessions” to “Show Up & Make” so the idea is easier to understand. This name also aligns with a long-running program called Show Up & Write that the library runs with our centre for teaching and learning, which is nice since I am now also collaborating on this project with Alexis Brown, a faculty member from CELT.

It’s only the second week and I’ve already done 3 of these sessions. People are coming! Not many yet, just 2-3 at each session, but it’s a start. Most interestingly, students have come who have never been to the Makerspace before, and they are coming not with projects or ideas, but with questions about what they can use the makerspace for and how. This is excellent but unexpected: these events were designed for people with ongoing projects or questions.

This is yet another example of something I learn again and again: there are many people who need permission to come into unfamiliar spaces, and events provide that permission.

Attention as pedagogy / Attention as love

For a long time, I’ve been thinking about how attention is the most valuable resource I can offer. Sure, I have some expertise in some areas, but even where I have something useful to share the prerequisite is still paying enough attention to know how I can help (see: reference interviews).

I increasingly find the thing students most want from me is my attention. They don’t so much want to explain an idea to me so I can help them as they want to explain an idea to me so they can see how it fits into their emerging sense of self; I provide a mirror in which they can explore their own developing identity and knowledge. The worst thing I can do is step all over that process by taking up all the space.

What I can do is be interested. I can give them my attention. And sometimes I might be able to give them some advice or suggest they think about something else.

photos

links

How to Raise Your Artificial Intelligence: A Conversation With Alison Gopnik and Melanie Mitchell

A very common trope is to treat LLMs as if they were intelligent agents going out in the world and doing things. That’s just a category mistake. ==A much better way of thinking about them is as a technology that allows humans to access information from many other humans and use that information to make decisions==. We have been doing this for as long as we’ve been human. Language itself you could think of as a means that allows this. So are writing and the internet. These are all ways that we get information from other people. Similarly, LLMs give us a very effective way of accessing information from other humans. Rather than go out, explore the world, and draw conclusions, as humans do, LLMs statistically summarize the information humans put onto the web.

Baking Bread, Finding Meaning

In short, they are distinguished by the sort of engagement they elicit from those who take them up. ==In Borgmann’s view devices are characterized by how they combine a heightened availability of the commodity they offer with a machinery that is increasingly hidden from view. Basically, they make things easier while simultaneously making them harder to understand.== Devices excel at making what they offer “instantaneous, ubiquitous, safe, and easy.”

Focal things, not so much. ==Focal things ask something of you. Borgmann speaks of their having a commanding presence. They don’t easily yield to our desire for ease and convenience. A radio and a musical instrument both produce music, but only one asks something of you in return.==

Why Aren’t Smart People Happier?

One way to spot people who are good at solving poorly defined problems is to look for people who feel good about their lives; “how do I live a life I like” is a humdinger of a poorly defined problem. The rules aren’t stable: what makes you happy may make me miserable. The boundaries aren’t clear: literally anything I do could make me more happy or less happy. The problems are not repeatable: what made me happy when I was 21 may not make me happy when I’m 31. Nobody else can be completely sure whether I’m happy or not, and sometimes I’m not even sure. In fact, some people might claim that I’m not really happy, no matter what I say, unless I accept Jesus into my heart or reach nirvana or fall in love—if I think I’m happy before all that, I’m simply mistaken about what happiness is!

All this has happened very quickly, which may make it seem like we’re careening toward a “general” artificial intelligence that can do all the things humans can. But if you split problems into well-defined and poorly defined, you’ll notice that all of AI’s progress has been on defined problems. That’s what artificial intelligence does. In order to get AI to solve a problem, we have to give it data to learn from, and picking that data requires defining the problem.
