messier@terminal:~/think$ _

i work in AI. that's why i'm saying this.

i'm a nonbinary AI engineer building systems at the intersection of intelligence, creativity, and care. and i've become increasingly convinced that the way we're introducing these tools, especially to children, is doing quiet, serious harm.

this page is for anyone who wants to think about that with me. or bring me in to think about it with others.


what concerns me

01 cognitive offloading

when we outsource thinking before building the capacity to think, we don't augment intelligence, we replace it. research on the google effect (sparrow et al., 2011) and cognitive offloading (risko & gilbert, 2016) shows the brain encodes information differently when it knows retrieval is external. the question isn't whether tools help. it's what atrophies when we stop doing the hard internal work entirely.

02 children + development

children are being handed intelligence amplification tools before they've built their own intelligence. maryanne wolf's work on deep reading shows how the brain literally rewires itself through effortful cognitive practice. a child who has never learned to sit with a hard problem, who has only ever prompted their way through, is not ready for a collaborative relationship with AI. they're in a dependent one.

03 systems + equity

the costs of uncritical AI adoption are not evenly distributed. safiya umoja noble, virginia eubanks, and kate crawford have all documented how algorithmic systems encode and amplify existing inequalities. AI literacy is not a neutral skill. it's a question of who gets to understand the systems shaping their lives, and who is merely subject to them.

04 the inside view

i work in AI. i help organizations implement it. i build these systems. and that's exactly why i'm saying this, not despite it. i've seen what happens when we skip the part where humans stay in the loop. when speed-to-deployment trumps critical thinking. when the tool becomes the default, not the option.


what i offer

schools

workshops on AI literacy + critical thinking for students aged 12-18. not 'how does AI work', but how to stay in charge of your own thinking while using powerful tools.

health + welfare orgs

talks on screen dependency, cognitive health, and digital wellbeing grounded in current research. framed for prevention, not panic.

companies

responsible AI introduction: what it means to bring these tools into an organization in a way that augments rather than replaces human judgment.

parents + educators

frameworks for talking to children about AI. how to model critical engagement. how to build the habits that matter before the tools become invisible.


where i stand

i am not against AI. i think the human-AI relationship, built right, is one of the most interesting things happening in the world. i want to be part of building it well.

but "built right" means humans come in with something: their own capacity for reasoning, for sitting with difficulty, for being wrong and correcting themselves. children especially need to develop that capacity before they enter a dependent relationship with a system that will do it for them.

that's the work. not fear. not prohibition. literacy.


get in touch

if you're a school, a health organization, a company, or a person who wants to think about this, reach out. i'm based in belgium, working locally first, but the conversation is open.

maramasaeva@gmail.com

linkedin

messier@terminal:~/think$ _