At the dawn of digitization in 1995, the French philosopher Jean Baudrillard observed, “Words move quicker than meaning.”
Now with the advent of AI, words, images, and video are created quicker than their meaning.
What happens to us when all of this information is created and proliferated faster than anyone can really make sense of it?
There will be only one thing left to automate: meaning. The last task of the computer is to give our lives meaning.
Here’s Nicholas Carr:
[my three books] examine how we came to apply industrial ideals and measurements — efficiency, productivity, speed, profit — to the most essential and defining of human pursuits. [One] looks at the application of those ideals and measurements to thinking; [another], to doing; and [the third], to communicating. The way computer systems have abetted the encroachment of the industrial ethos into the most intimate facets of human life strikes me as one of the most important stories of our time.
It is possible to accept the ambiguity of a tool’s future if you choose tools that make self-guaranteeing promises.
A self-guaranteeing promise does not require you to trust anyone.
Example: apps that work on an open file format make a self-guaranteeing promise, because regardless of what happens to the business or what they “promise” to do, it’s guaranteed that your data will be accessible because it’s already in an open format.
I’d forgotten what it feels like [to] make pull requests and break things and not ask for forgiveness because the UI can be better, it must be better. There’s a momentum to this sort of work that I crave deep down in my bones because it doesn’t rely on meetings or six months of quarterly planning or going up the chain of command.
The book is free to read. Properly free. Not the kind of “free” where you have to supply an email address first. Why would I make you go to the trouble of generating a burner email account?
LLMs are a “lubricant for the crushing weight of complexity”.
Say what you will about the mountains of technical debt at your average startup, but that technical debt was hard-won, written by hand over the course of years. With modern tools, we can greatly accelerate that process.
You thought npm made it easy to quickly pile on complexity? Let’s see where code-assistant LLMs get us.
Tom also poses a good question: where does all this accelerated productivity take us?
Here’s my question for companies using LLMs: when will I see your productivity gains and cost savings reflected in my bill?
I prefer to manually syndicate, i.e. copy and paste a link and write a thoughtful gist. I think scripting that would be less engaging. Personally, I don’t care for automated feeds; I can’t expect others to interact with mine if I automated it.
Me too!
I have to admit — and I realize I need to just get over this, I suppose — I feel this same hesitancy when others post and you can tell it’s automated (like cross-posting). It’s this feeling of, “Should I respond to this? Idk…” It’s like you can feel just a tinge of the “A robot posted this, do I respond?”
planning for living is nearly always a fool’s errand, with responding, improvising, adapting, and experimenting all better methods to follow
Art is in the little things we all do.
Art is in the weird and wonderful websites I make occasionally, which bring people (including myself!) a moment of joy when they land on them.
Art is in the food I make...I show love through my food, so I bake for people...Hours of preparation is gone in minutes, but it’s worth it.
there’s even a bit of art in shitposts and stupid jokes that I make knowing that people will roll their eyes but it gives me a little bit of joy to think about it.
Amen.
The human brain itself — that mysterious maker of metaphors — has through the ages been portrayed as (a) a hydraulic pumping system, (b) a clock, (c) a telephone switching network, (d) a digital computer, and, now, (e) a large language model. In constructing machines, we also construct ways of seeing the world, and ourselves.
We build all these systems and we complain about them as if they’re out of control, as if they’re controlling us, but we build them.
Analogous to a code base, no?
We complain about the state of the world, but guess who constructed the world in the state that it’s in?
technology is a repository of human desire, a full critique of any machine needs also to be a critique of human desire. We’re the machine’s makers before we’re its victims.
Could someone please get me a mirror.
A horizontal frame places a person in a landscape. It emphasizes the ground in which the figure stands. It provides context…Verticality erases the landscape, the ground, the context. The figure stands alone, monumental in its solitary confinement.
We change our tools and they change us.
Generally speaking, I don’t think redesigns or rebrands are constructive. Change tends to jazz up interest and rebrands and redesigns are no exceptions, but I’m strongly of the opinion that brands genuinely don’t matter that much – whatever positive associations a brand has comes from consistency – and if companies invested the same time and effort into actually serving their customers, they would see the same effect.
[Insert Jim’s seal of approval here.]
The art of hypertext is writing:
social media has largely kneecapped true hypertextual writing by not enabling it...(Instagram, by far the world’s most popular such social network, doesn’t even let you paste hyperlinked URLs into the text of posts.) The only links that work like web links, where readers can just tap them and “go there”, are @username mentions. On social media you write in plain un-styled text and just paste URLs after you describe them. It’s more like texting in public than writing for the real web. A few years ago these social networks...started turning URLs into “preview cards”, which is much nicer than looking at an ugly raw URL. But it’s not the web. It’s not writing — or reading — with the power of hyperlinks as an information-density multiplier. If anything, turning links into preview cards significantly decreases information density. That feels like a regression, not progress.
When we tell ourselves “I have to…” the reality is almost always “I choose to…because…”. For example:
“I have to stick to my lane,” vs “I choose to stay silent because I don’t want to risk my promotion”
There’s always a choice.
The “have to” narrative positions us as repositories of instructions made elsewhere, as if we were just programs following the code we’ve been given. It conditions us to accept more and more instructions over time, as we become accustomed to that programming.
It leads to:
what happens when you refuse to acknowledge your own choices is you eventually forget who you are: you become accustomed to having so much decided for you that you forget what it means to decide for yourself. You have a hard time knowing what it is that you want, because it isn’t presented to you as an available option.
Whereas in contrast:
The “choose to” narrative has no illusions about our power and recognizes that we are small players in a bigger, and certainly unjust, world. But we are not machines. And maybe we don’t like the choices available to us, maybe we wish there were others within reach. But once we accept that there are choices to make, we may notice where we have some room to maneuver, some space to play with, some opportunity or avenue or loophole we can exploit.
As you build your craft…you develop ever more ideas about what’s possible in your work. As your skill grows, so too do your ambitions, such that your taste always and forever outstrips your abilities.
Also love this:
The work of creativity... [is] not what you create, but who you become in the act of creation.
Making a personal website is not about what you make, but who you discover while making it personal.
An expectation of abundance breeds profligacy, a willingness to waste things. An expectation of scarcity breeds frugality, a concern with using things judiciously.
Makes me wonder: What can bring an expectation of scarcity to the way we build on the web?
Don’t worry about all the energy we’re sucking into our data centers, because our data centers are going to make energy free. Don’t worry about all the money we’re amassing, because we’re going to make everyone prosperous. Don’t worry about all the time you spend looking into the screens we’ve given you, because through those screens lies paradise.
Remember when one of the arguments for “going digital” was its eco-friendly nature? “Think of all the trees you’ll save!” Now we have AI companies that want to go nuclear for power.
The more compelling and interesting reason that most writers seek out readers is…we receive our writing as a gift, and so it must be given in turn. We write because something needs to be expressed through us, and only by giving the writing to a reader is that need fulfilled
Love it.
your writing will eventually reach people who don’t understand the context, but will engage with it anyway, and expect you to engage with them in turn.
This is social media. I don’t know how we arrived here, but the undergirding pattern is: “Here’s a link to something on the web, and here’s a comment box. Everyone weigh in with your opinion.”
We are all, to borrow from Byung-Chul Han, entrepreneurs of ourselves—whether willingly or reluctantly, optimistically or despairingly or, more often than not, all of the above.
Applicants are applying for jobs with LLM-generated representations of themselves. Recruiters are combing through applications with LLM-assisted tools. They’re passing each other like ships in the night.
the hiring process has become entirely too much and far too inhumane. Treat people like machines, and they will behave like them.
We’re desperately seeking humans with dehumanized tools and processes.
at the end of the day, you’re not trying to fill a job quota; you’re trying to find future colleagues. A dehumanized and dehumanizing hiring process is not going to generate a productive collaboration at the other end.
I feel like this describes so many apps today:
the app is riddled with ads, including casino slots and regular 18+ "chat and play" apps...Free users can download smaller 1080x1920 versions of wallpaper, but only after they sit through several ads and then try to click the impossibly small "x" positioned on the bleeding edge of the display.
Have native app stores learned nothing from the web?
Love this perspective though:
If my wallpapers were slotted next to gambling and sex chat apps, I would be beside myself. For me, the ability to display and manage my wallpapers is as fundamental as the product itself
Granted, there’s context to this perspective:
My wallpapers being free is an exception, not a rule. I do it because I have stable work and have chosen to dedicate my off time and resources to producing them. I see what I make as a gift to the community I care deeply for.
That’s pretty much how I feel about blogging nowadays, even in the face of AI-scraping bots.
Sean has a great article here on the “randomness” of computers generally, with this observation on AI:
generative AI, too, can act as a force for homogenization. Nothing any model produces is truly random. After all, generative AI is only novel insofar as its training data allows, which means it can only “remix” that which it has already “seen” in the past. It feels novel to us because none of us has seen nor read the entirety of the Internet.
Every time I hear the “novelty and creativity is rare” line I get so angry I have to stand up and pace to calm down.
lol.
novelty and creativity is not rare; it’s unfunded.
Every person ... has ideas that they’d like to do, but avoid because nobody has the resources to take a risk on something unprecedented.
Insert line here about how a data-driven mindset encourages the suppression of novelty, because there can be no data for the novel.
The issue is that novelty, by definition, doesn’t have a track record, which means that you can’t predict how it’ll go, but given the nature of the economy we live in, odds are that it will not make enough money to pay for itself.
Novelty isn’t rare. The money you need to support its exploration is.
Making genuine novelty is all too often career suicide. We could have so much more, so much more variety and diversity, if we weren’t so utterly dominated by an economic system that values homogeneity and marketability over all.
Just because someone is not using your service does not mean they want to.
The other side of this coin is true too, I think: just because someone is using your service doesn’t mean they want to.
Also love this framing:
Customization lives at the heart of accessibility.
Telling someone to make something “accessible” might sound like a task. Telling them to make it “customizable” sounds fun.
Is customization the Trojan horse for accessibility?
blogging is as much about human relationship as it is about the content. I read other bloggers because I care about what other humans think about topics I also find of interest. It is exactly this human impression, and the uniqueness thereof, that I am after when I read others’ blogs. I’m not interested in reading what machines have to say;
I think this also articulates why I don’t love most corporate blogs: the individual voice is lost and subsumed by the group (corporate) voice. What I’m really after is the individual point of view.
We want speedy internet and fast-loading services because we want to stop pushing buttons and opening accordions as quickly as possible
smartphone ownership is often a deciding factor for access to many cultural experiences, or at the very least, a differentiator in the level of service one now receives in a multitude of commercial interactions.
As someone with older parents who are increasingly ostracized from society because of their inability (and also unwillingness) to operate a smartphone, this is spot on.
As smartphone-oriented infrastructure continues to seep into everyday life, it begets ever-greater dependence on smartphone technology, and thus ever-greater dependence on the producers of this technology. Those who cannot afford smartphones, or those who simply choose to live without smartphones, find themselves increasingly excluded from full participation in society, often with no viable alternative. Like with the automobile, we may soon become locked into this way of being. Perhaps we already are.
For nearly two centuries, we’ve embraced the relentless speeding up of communication by mechanical means, believing that the industrial ideals of efficiency, productivity, and optimization are as applicable to speech as to the manufacture of widgets. More recently, we’ve embraced the mechanization of editing, allowing software to replace people in choosing the information we see (and don’t see). With LLMs, the industrialization ethic moves at last into the creation of the very content of our speech.
All this posting is tiring. If only we could have something do it for us!
Having an app fiddle with your writing now seems normal, even necessary given how much time we all spend messaging, posting, and commenting. The endless labor of self-expression cries out for the efficiency of automation.
Which leads to the “industrialization of human communication”.
LLMs give us ventriloquism in reverse. The mechanical dummy speaks through your mouth.
there's nothing to sell if the art doesn't happen in the right way. It has to be protected. And it can't happen on the same kind of a timetable that business can. It's just a different thing. Art doesn't come in a quarterly way.
It doesn’t matter how artistic breakthroughs work — they’re not really meant to be known — just so long as they happen and you can recognize when they do.
knowing how [art] works doesn't make it work. The magic isn't how it works, the magic is the magic. The magic happens in a way that's intuitive and accidental at times, or incidental where you try lots of things and suddenly something works and you don't know why.
If you're not interested in working on it any more, it's done.
the number-one skill of every truly good senior engineer is…being unbelievably good at debugging…Because [software is] always broken. Even if it’s working, it’s broken.
The ease with which any human on the planet can reliably access and read a web document from thirty years ago on any device with a browser today is beyond beautiful.
On the other hand, when creations from less than a year ago require making changes to the original document, untangling and upgrading a rat’s nest of conflicting dependencies, installing a specific version of a runtime or build tool, and then figuring out how to open it on a device that may or may not support it, that isn’t a formula for success.
Nailed it.
the character of one’s writing, especially in early drafts, includes the character of the physical artifact, which, when produced with an industrial machine like a computer, is otherwise homogenized and destroyed.
Agree. There is something beautiful and inimitable about your own handwriting.
In our rush to digitize the world, we often underestimate the value of the patina, subtle imperfections, and otherwise visible history of the physical objects we choose to digitize. And in the process of digitization, we both erase that history and then fail to recreate it.
we need not always minimize ourselves to accommodate our devices
I’m going to keep writing and making things because it brings me joy, and I might as well find some way to do so without grumpiness.
Browsers are user-agents — and so are feed readers!
Feed readers are an example of user agents: they act on behalf of you when they interact with publishers, representing your interests and preserving your privacy and security. The most well-known user agents these days are Web browsers, but in many ways feed readers do it better – they don’t give nearly as much control to sites about presentation and they don’t allow privacy-invasive technologies like cookies or JavaScript.
However, this excellent user agency isn’t well-defined, and we don’t even know if it’s consistent from reader to reader. We need a common understanding of what a feed reader is and what it isn’t,
The first rule of feed readers is we don’t talk about feed readers.
Now JavaScript (and privacy-invasive tech) will be coming for our feed readers.
Society has long hinged on photographs (and video) to give us the truth. When authorities concealed reality or it was too far away to understand, photos and videos told us the truth.
If I say Tiananmen Square, you will, most likely, envision the same photograph I do. This also goes for Abu Ghraib or napalm girl. These images have defined wars and revolutions; they have encapsulated truth to a degree that is impossible to fully express
Sure, there have been fake photos and videos, but they’ve been the exception. That’s all about to change.
the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after
No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus
Here’s how the product folks building this stuff think about it:
the group product manager for the Pixel camera described the editing tool as “help[ing] you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.” A photo, in this world, stops being a supplement to fallible human recollection, but instead a mirror of it.
So photos are just our own hallucinations? Because human memory is not very good.
A great paragraph describing the state of how the web works:
Go to The Verge (just to poke at a site I generally like) without an ad blocker, open up the Network panel in DevTools and just let ‘er rip. I’m seeing 400+ requests. That’s tracking at work. You can even just sit there and watch it continue to make requests over time, even while you’re doing nothing. JavaScript is whirring, soaking up whatever data it can, setting cookies, and blasting data along with your precious IP address to god-knows-where. All those requests are slowing down the site, costing you bandwidth, laughing at your privacy, and triggering legislation that means, at the least, you have to click a giant content-blocking banner with a “yes, this is fine.” button.
Great piece about building software. So many great lines.
I am not entitled to more of a user’s time than this problem is worth
[As software makers] we do not need to overstep our relevance.
If I make the solution more annoying than the problem, they will choose to live with the problem, and that makes me part of the problem. I don’t need to make them like me or this product, I’m here to get them through this thing.
So much software exists because other software exists. It’s a recursive problem.
software, even if it solves a real problem, likely solves a manufactured problem that might have been caused by the existence of some other software
The author even touches on passwords!
I like making password reset flows simple because I don’t think anyone should need to push things out of their brains that they actually want, in order to remember the strong password they need for a system that’s not worth anything to them (don’t talk to me about password managers—most regular people don’t use them).
Building software summed up right here:
A lot of the time, we’re not there to design for [users], we’re there to “make the number go up”
But at a certain point, a hammer needs to hammer whatever it strikes, and sometimes, alas, that’s the user’s thumb.
This is how I feel about HTTP imports. They were a hammer, however much people were striking themselves in the thumb.
Creative collaboration requires effort, argument, trust, and play. The ability to fight for an idea, and then let it go. To be open, and then decisive. Knowing when to work together, and when to work apart. Cycles of action, reaction, reflection, etc.
Love that articulation — “the ability to fight for an idea, then let it go”. It’s a continuous social reconciliation.
Our job, together, is to hone and curate that work towards the exclusive vision through continuous questioning and articulation.
We define a vision by the choices we make, and we clarify that vision by the choices we reject.
Rich Ziade:
whatever you believe about AGI, if the purpose of a technology isn’t to empower humans, then what is it? If it’s “okay” to feed people garbage knowledge invented by overheated NVidia GPUs in order to “save them time,” that’s a really cynical point of view on human beings.
Also, I like his rule of scaling:
As the number of customers increases, a company’s opinion of them goes down.
Paul Ford on the icky feeling folks are getting from “AI” products and marketing:
The human desire to connect with others is very profound, and the desire of technology companies to interject themselves even more into that desire—either by communicating on behalf of humans, or by pretending to be human—works in the opposite direction. These technologies don’t seem to be encouraging connection as much as commoditizing it.
Perfect summary of the state of AI today: it’s not aimed at encouraging productivity, creativity, or connection, but rather at commoditizing all of them.
Tolerance, skills, knowledge, and health are always with you, wherever you go. They are assets but they take up no space.
Love this idea of “making kin” over networking, contributing to your communal environment over seeking quid pro quo.
Jettisoning networking in favor of kinworking means taking a more ecological approach, one oriented towards nurturing the soil, planting seeds, providing water and sunlight—and then accepting that you have no control over what grows. This is as opposed to the strip mining orientation so common to much traditional networking, the expectation of a trade in value, of a return on the investment. The difference is between the act of contributing to the ground on which you and others stand versus negotiating an exchange that leaves the earth barren and dry. Which is not to say that kinworking doesn’t deliver, but rather that what it delivers isn’t capital but life—that connected, abundant, joyful experience of living among people and working, together, for a better world.
Using whatever client software you want to access content published using open standards on the internet is the way the internet was designed to be. But it’s not the way it’s worked out, by and large. Streaming video is largely available only via proprietary apps from each individual service. Same with streaming music.
Sad.
But not so with podcasts. Podcasts, more than any other medium, exemplify the original spirit of the open internet.
Hopefully there’s still a future where people say something like, “Subscribe wherever you get your content.”
I’m hoping Project Tapestry explores this space...
Systems want to grow and grow, but without pruning, they collapse. Slowly, then spectacularly.
Remember those who did the invisible work of removing.
Love this change in how we frame accountability: from “the one who gets the blame” to “the one who tells the story of where and why things went wrong”.
our knee-jerk response to the question of “what does it mean to be accountable?” is too often “the person who gets fired when things go wrong.” This is a measure of accountability that equates accountability and punishment: to borrow from Sidney Dekker, it makes accountability something you settle, a debt you have to pay. When you’re accountable for a car accident, you pay the fine; when you’re accountable for not hitting the annual target, you lose your job.
In contrast:
Fortunately, that’s not the only model for accountability we have. Webster’s 1913 defines accountability as being “called on to render an account.” To render an account is to tell a story. In this way, an account becomes something you give—something you observe, come to understand, and then narrate. Being accountable in this model means being the storyteller rather than the fall guy.
But if you’re not quite ready to become VP, you can start with yourself:
So to get started, you can refrain from the demeaning self-talk the next time you do something that, in hindsight, looks like an error. Instead, you can practice asking questions like, what was I thinking and noticing when this happened? How did I respond to what I saw happening? What did I expect? How did I see myself at that moment? The point here isn’t to figure out what went wrong so you can avoid making the same mistake again. The point is to understand how the choices, decisions, and actions made sense at that time.
It all comes back to blogging:
you have a story to tell—a story full of fuckups and hard times and achievements unlocked and enough lessons for several lifetimes. Don’t keep it all to yourself.
You probably don’t really know why [something] didn’t work anyway. It feels good to imagine you do, but there isn’t a long history of people doing the same thing again, swapping this one wrong thing for another presumed right thing, and turning it all around.
You just gotta keep doing:
Next time isn’t [the time you failed] again, it’s a time that never happened before.
when you are in the thick of it...You’ll feel like you’re flailing about and you’ll want to scream or cry or both at the same time. Your boots will stick in the mud and your ropes will fray and you’ll lose your flint on the coldest night. It will be chaos. But it was chaos that birthed the universe. It is from chaos that many great stories begin. You’ll tell yours in time. First, you have to live it.
There are many kinds of value. Time, space, personal freedom. Monetary worth is only one. You choose what holds the most weight.
Mundanity and profundity often arrive hand in hand.
Withhold judgment...Think of all the people, shows, stories, and experiences you’d miss out on if you never gave them a chance. Give the world an opportunity to surprise you.
Fear is a yield, not a stop sign
The fear:
I fear that I will receive even more emails that say nothing, even more PowerPoint presentations without story....What seems threatening about AI is that it further clutters the world...Kitsch pretends to be what it is not. I don’t want to waste my time trying to give sense to carelessly generated nonsense that never had any.
The hope:
we become much more refined and attentive in the exchange of information
AI is not coming for good design:
I had and have no fear [of AI putting me out of work]. Design requires thought; it is time-consuming and difficult. AI does not think, it computes.
bad images devalue the work you’ve put into your writing.
Great apps are the ones that don’t sap your energy:
You can spend 4 hours doing work that saps your energy and 4 hours where you feel empowered and, to me, good apps [make me] feel empowered and [they don’t] sap my energy.
Same for websites too!
With templates and boilerplate and starter packs and generative AI we can more easily spin-up complicated working systems than ever before...but to build something of beauty still requires ingredients that remain in short supply: time, attention, and care.
🎯
Really enjoyed this entire article. Couldn’t stop taking notes!
When I’m on a software project, I try to listen hard to what everyone is saying, to the words they choose...I ask critical questions about what people are really thinking, what we’re all hearing each other say, from the start.
Why? Code is a whole lot easier to change before it exists.
Here’s the hidden truth of education: You don’t know what you’re preparing for.
You don’t know. Your teacher doesn’t know. Your school doesn’t know. Your future employer doesn’t know. Nobody knows. Not really.
Much of what you’re preparing for doesn’t even exist yet. We hope it doesn’t exist: don’t we educate students in the hope that they will make the world better by changing it? By creating new realities?
Doesn’t that mean education is impossible? Not at all! Because we’ve learned over time that there are kinds of learning that help people prepare for an unknown and unknowable future world. Not just specific skills or subjects, but kinds of learning: approaches rooted in curiosity, exploration, seeing closely, questioning, critical examination, taking multiple perspectives, using multiple kinds of tools, synthesis, communication, dialogue, relationships.
We will often only understand our formal learning in hindsight
Is a Religious Studies course “for” a software career? Well, is a Computer Science course “for” a software career? That Religious Studies course applied to my software career in exactly the same way that my Algorithms course applied: I rarely use (and have largely forgotten) the specific knowledge from it; I use its approach, its patterns of thought, constantly.
I want to copy paste everything in this article.
There is always a tension in education between teaching the knowably practical and the unknowably valuable.
So what is a liberal arts education?
learning that will be valuable in unknowable ways in an unknowable future
To be clear: both are necessary (vocational and liberal arts educations).
having no access to vocational education is as damaging as having only access to vocational education
What does liberal arts even mean? It means free as in self-determining.
If a person lives a life of servitude, if they are enslaved, don’t they need only vocational education? If their human existence has no utility beyond their job, if they cannot shape their world or create new paths through it, then why do they need anything but immediately practical skills? Why teach them things we know they don’t need? Isn’t it only free, fully privileged, self-determining people who also need a liberal arts education?
Think about that. Think the mixture of elitism and derision with which our society views “liberal arts” today. Think what that says about how we view human beings.
Damn, that burns. Especially in tech.
Then later:
The term “liberal arts” came from a world where servitude and slavery were the norm, where the power structures of society worked to limit self-determination and world-shaping to a select few. I wonder how different that world really is from ours.
It’s liberal arts as in liberation. Economic advancement is not liberation.
I cringe, cringe deeply, to my core, when people try to create socioeconomic mobility by force-pushing tech and STEM and give-them-lucrative-careers content into schools...because at its heart, this push is about meeting employer needs, not human needs. It is asking students to conform to the world, not to reshape it.
Later:
I don’t think that increasing the size of the STEM labor pool by shoving marginalized kids into it ends marginalization. I think it lowers labor costs for employers who are sick of paying people so much money to write software.
It’s the “shoving” that’s the problem. What should be at the center is fostering a person’s curiosity. This phrase sticks with me.
My curiosity changed my life.
career outcomes are [not] the ultimate justification for the utility of education:
Discovered via notes.billmill.org
We collaborate on everything because of each person’s unique background and skills. While each person can execute independently, we’ve found that our best work is done through collaboration. None of us is process/tools-driven. We care about the outcome, not the process/tool.
I kinda like this nebulous definition and approach to “design engineer”:
Interested? We’re hiring Design Engineers. Apply even if you’ve never had the Design Engineer title. As you’ve read, there’s no correct background or skills for a Design Engineer.
I like that — “there’s no correct background”. Just a desire to build a great, coherent product. That means considering all cross-disciplinary elements: color, typography, code, performance, accessibility, copy, graphics, storytelling, and more. Knowing and appreciating all of these — and understanding who on your team can help you deliver each — is an essential part of the job.
We lose interest as soon as we realize that what we read never had any intended meaning.
If you believe in the idea that human beings are endowed with a kind of intrinsic value that makes us — our “intelligence” — different, then part of what makes writing valuable is the human being who wrote it, not the probability machine that made a word smoothie.
Why should I care on this side when no one cares on the other side?
Also, I like this economic parallel of inflation hitting the digital world:
Technically crisp, well-made videos are expensive. They take a lot of time. They require a lot of people in the creation process. They cost a lot to produce. They look expensive. The prospect of being able to make technically high-quality videos in just a few seconds is attractive. But foreseeably, the very same inflation that has hit AI images will hit AI video...And then they will start to devalue what is connected to it.
[ai images and video are] like everything cheap and easy, they are losing their creative and economic value at the same pace as they have become ubiquitous
Computer-generated videos impress at first sight, but soon they will follow the development of computer-generated text and images and carry close to zero stylistic, economic, or creative value until they become a liability and will cost more than they add, as they drag everything around them down.
I’d strongly recommend other product teams consider automatically deleting employee accounts
Wait, what is this self-induced chaos? It’s wild, but…I kinda like it.
Graphite aims for fast, bug-free, and painless onboarding. The best way for us to ensure this is to suffer through onboarding once every day ourselves. Across our full Eng-product-design team, any individual only gets deleted once a month on average. But one teammate a day hitting a sharp edge has proven enough to find and motivate fixing issues.
Reminds me of Robin’s advice: to build a great product, you just have to use it. Over and over and over. There is no substitute.
(via Eric Bailey’s newsletter)
What’s strange about this article is that, if you generalized it, it feels like it could be the history of the internet for the world at large.
[They were] hooked up to the internet…only to be torn apart by social media and pornography addiction.
Like these quotes from people in the tribe:
When it arrived, everyone was happy…But now, things have gotten worse.
Some young people maintain our [ways]…Others just want to spend the whole afternoon on their phones.
It’s like the internet as a 21st century version of manifest destiny…
First let’s point out the elephant in the room: an analog approach will be more limited functionally. That’s why most screens exist today, whether it’s on your smart fridge or your EV car: screens are popular as you can cram dozens of small buttons on them. But as we said before, this is mostly a production concern, it ignores the many ergonomic, safety, and even sensual aspects of physical controls.
Why is everything becoming a digital screen? Because screens are easier. They allow you to be lazy and not make trade-offs.
Engineers tend to think mathematically while Designers tend to think statistically. Both are correct, they just have different goals. If your ONLY goal is to cover every possible feature, then having a touchscreen is reasonable. However, if you think statistically and ask “What do people do 99% of the time?” you get a much different answer.
Kinda wish more software took this line of thinking.
just writing down notes is all that really matters. Any tool that allows you to compose and save text will do. It is the act of writing, not the act of linking or reading or revisiting, that clarifies thought and leads to insight. The rest is all superfluous.
Agree 100%. Writing is the act of refining your thinking. Notes are the byproduct.
What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it—with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.
It’s a shame how shamelessness is increasingly rewarded in the world — pun intended.
a goal cuts off avenues for exploration. It welds you to what past-you wanted, effectively disenfranchising present- and future-you. That is, in fact, the point: by committing to a goal, you commit to not wandering off your chosen path. But that means that when things show up along that path—new opportunities or new knowledge, or simply changing conditions—you lack the license to veer off or head in another direction.
That’s me. I don’t go where I planned, because I didn’t plan anywhere, but I do get somewhere.
I like to talk about intentions rather than goals. An intention, as I’m using it here, is a kind of bending of the self towards something, a commitment not to a specific path but to a scope of attention or way of being...Instead of something you might achieve, it becomes something you do; instead of someone you could be, it becomes someone you are.
As a non-goal oriented person, I approve of this message.
a company that once proudly served as an entry point to a web that it nourished with traffic and advertising revenue has begun to abstract that all away into an input for its large language models.
This new approach is captured elegantly in a slogan that appeared several times during Tuesday’s keynote: let Google do the Googling for you. It’s a phrase that identifies browsing the web — a task once considered entertaining enough that it was given the nickname “surfing” — as a chore, something better left to a bot.
This is what I said: when did surfing become doomscrolling?
Whatever labor funded the production of knowledge that it refers to goes unmentioned, and whatever sources it relies on go uncredited.
Google at last has what it needs to finish the job: replacing the web, in so many of the ways that matter, with itself.
It’s kinda wild when you think about it. Google’s search engine now has an option to search the “web”, which is not the default anymore, and that kind of leaves you wondering: what am I even searching anymore?
You don’t have to take my word for it though. Accept it as a hypothesis, and give it a shot.
I want those words to preface everything I publish.
Include design earlier. Include engineering earlier. The split is the failure.
I often feel like I'm ping ponging between these two scenarios:
- "We would have caught this sooner if engineering was more involved in the design process".
- "We would have designed this differently if we knew engineering couldn't handle it"
At what point do we stop and realize that we are setting our teams up for failure?
Also this on looking for a job:
Am I going to find a place that will allow me to actually utilize all of this web design knowledge I have? Or, am I going to just be a JS engineer who spends most of my time configuring pipelines or doing ops work? I would say that most larger organizations favor the latter and live with the gaps in design.
That feels true to me. Orgs favor folks at the back-of-the-frontend and live with the gaps that result from that choice, hoping that when it comes to the front-of-the-frontend they’ll “figure it out as they go”.
Jerry says he prefers a yellow legal pad over typing on a keyboard. He calls a keyboard "too corporate" and says he likes how contrary the legal pad is in this way.
To be creative, you gotta feel like you're getting away with something.
I like that. I'm gonna ask myself that on personal projects: “What aspect of this makes me feel like I'm getting away with something?”
That's fun.
You can use any attributes you want on a web component…I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.
So I use data- attributes…[and] the browser gives me automatic reflection of the value in the dataset property.
Wait, what? Why am I not doing this?
Instead of getting a value with this.getAttribute('maximum'), I get to use this.dataset.maximum. Nice and neat.
Smart.
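Worth noting how that reflection names things: per the HTML spec, the browser strips the data- prefix and camelCases the remainder, so data-max-value surfaces as el.dataset.maxValue. Here's a minimal sketch of that mapping (the dataKey helper and the max-value name are my illustrations, not from the quoted post):

```javascript
// Convert a data-* attribute name to its dataset property key,
// following the HTML spec's rule: strip "data-", then camelCase
// each "-x" sequence.
function dataKey(attrName) {
  return attrName
    .replace(/^data-/, "")
    .replace(/-([a-z])/g, (_, ch) => ch.toUpperCase());
}

// Inside a custom element, the two access styles would look like:
//   this.getAttribute("data-maximum")  // string, via the attribute API
//   this.dataset.maximum               // same string, via dataset
console.log(dataKey("data-maximum"));   // "maximum"
console.log(dataKey("data-max-value")); // "maxValue"
```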
Everything Manuel says here about self-promotion is how I feel.
the correct approach seems to be…You have to constantly remind people to like and subscribe, to support, to contribute, and to share.
Sometimes, all I want is to comment on someone’s post to say « lol » or « nice thanks for sharing » or « saaame! » and that’s not something that warrants a whole blog post and entry in my RSS feed. No, I don’t want to email you or send you a webmention to say « hehe this was funny ». Some things are funny, but not « share on my platform » funny...I feel a lack of connection on the indieweb and it sometimes makes me sad.
This is part of why I enjoyed Twitter — and it's a big part of why I still enjoy Mastodon.
I want a place for fun, ephemeral conversation. Stuff that doesn’t merit a blog post. And I don’t want to manage, host, or archive that conversation. If it disappears tomorrow, that’s fine. I’ll find a new place for ephemeral conversation.
This is me. 95% of my music listening is still full albums.
My wife made the comment that I'm one of the few people that still listens to albums and I think there's a lot to be said for respecting the artist's intent and effort that went into sequencing the release.
Come to think of it, AI is a playlist but of words. Everything out of context, mashed together.
At no point are these men actually providing leadership. They’re just in the leader class…
Kind of an interesting distinction: there are leaders who provide leadership, and there are leaders who are in the leadership class but do nothing of the sort.
Also, I loved this anecdote from somebody who works in the film industry:
“You get [executive] producers that say, ‘I want to be involved in the artistic process,’ and you’re like—I don’t ask to look at your spreadsheets, man.”
People are like fields: you gotta intentionally let them lie empty producing absolutely nothing for a little while:
You see, one of the most important things, which is true of both plants and people, is that you’ve got to let them lie fallow for a while. A field cannot support the same crop forever. You must intentionally let the field lie empty, producing nothing…
People are the exact same way.
whenever attempting any effort with other people, prioritize building trust and respect for each other over and above any other goal. The trust forms the foundation from which the work can grow.
This looks great. I may just need to get this book.
if we want to build cultures where productive disagreement can happen…we have to first establish and nurture that trust and respect. Otherwise we’ll be too busy being right to get around to learning something new.
That’s the thing that’s interesting about open source. Instability and chaos make it stronger. On the other hand, that instability is really dangerous for companies that are trying to extract value because it becomes very slippery and brittle.
Big company: why are you rocking the boat?
Community: we don't mind rocking the boat because we’re not a giant ocean liner.
A big company is like a giant ship that has to start making its turn a mile before the actual turn itself. Whereas the community is comprised of thousands of small, distributed boats that can make (in contrast) hairpin turns.
Any experienced programmer worth their salt will tell you that producing code — learning syntax, finding examples, combining them, adding behaviors, adding complexity — is the easy part of programming.
The hard part: “How can it break? How will it surprise us? How will it change? Does it really accomplish our goal? What is our goal? Are we all even imagining the same goal? Do we understand each other? Will the next person to work on this understand it? Should we even build this?”
Spot on.
“Do we even collectively understand each other and what we’re doing?”
That’s a hard one. It’s hard enough for the people building the software to figure that out together, let alone the people building it and the people using it to figure it out together.
I’d even bet $5 that the [MSNBC] “native app” is actually a bunch of web views in a trench coat.
This is my new fav saying:
So many "native apps" are just web views in a trenchcoat.
To celebrate his 40th birthday, Michael Flarup put together a list of 40 things he’s learned. A lot resonated with my own experience and reflects the same kind of advice I would give. Just a few:
Stop chasing perfect. Go for good and iterate.
Try not to please everyone with the things you make. It’s your work. Make it reflect your taste.
Creative work doesn’t follow a straight line from not-good to good.
You become what you work on. The world has a tendency to feed you more of what you put into it, so make sure it’s something you like.
Showing up every day and moving the ball, even just a little, almost always wins out over bursts of burning the midnight oil.
Find your sources of energy and wield them as tools.
Paul Ford, as ever, has the right words:
logging into anything with Adobe is essentially a statement on how little they care about human beings.
Lol.
if you talk to a designer... about [Adobe] one of the first things they’ll say to you is like, “I don’t even understand Photoshop anymore.” It’s like their girlfriend is now a scientologist.
Max on why we should ban margins at the component level:
Margin conflicts with how designers think. Designers think about space in relation and context. They define how far a component should be from another component in a specific instance.
By banning margin from all components you have to build more reusable and encapsulated components.
Makes sense to me. It’s all about relationships between things. As Matisse said, “I don’t paint things. I paint the difference between things.”
A computer is a general-purpose device that happens to run games. It’s that general-purpose-ness that expands what’s possible, and that’s something I value a lot.
Lot of good stuff in here about consoles vs. computers for gaming that can be generalized to interfacing with computers.
Also loved this comparison: consoles are akin to walled gardens, computers are like the web.
Similar to smartphones, gaming consoles are polished, user-friendly, walled gardens that guide you down a pre-destined path. Gaming on computers is more like “the web.” Open. Expansive. Chaotic.
Code review goes deeper than just checking a pull request for mistakes. It’s an important aspect of doing software engineering as a team.
This was a well-written articulation of the value of code reviews beyond the veneer of code “quality assurance”.
Some of my favorite parts about code reviews come from 1) what I teach myself in preparing them, and 2) what I learn from others who do them.
Code review gives the reviewer a chance to share knowledge with the author.
it's our fault. Ours, as a society. We celebrate when Apple becomes the first trillion-dollar company but we don't celebrate when someone says "You know what? I think I have enough".
If you’re the richest company in the world and you can have anything, what’s the one thing you want? More.
Decision making is what slows down most teams. The endless slide decks, the pitch to leadership, the lack of trust in what they’re building. They’ll go round and round in big circles trying to convince everyone in the entire company that this is the right thing to do.
Yup. Been there.
the hard work should never be the bureaucracy
Nailed it.
A lot of this vibes with my experience.
The way Tailwind actively pushes against making hasty abstractions is — really — the smartest thing about it. In my experience, when you’re building something new you’re better off making something functional quickly and worrying about code elegance and deduplication and abstractions later, when you’re hopefully still in business. With a little practice, in a Tailwind project it’s relatively easy to get into a just-building-the-thing flow state. I get to shove the part of me that frets about good naming and specificity and leaking styles and efficient reuse into a drawer for a bit. It’s kinda nice.
As with anything, it’s tradeoffs all the way down.
First, Tailwind’s build tooling lets you define new classes on the fly in HTML. This can be relatively harmless, like defining a one-off margin length. Or it could be like above with sm:py-[calc(theme(spacing[1.5])-1px)], where you’re involving media queries, accessing Tailwind’s theme values, then doing math to make a one-off length and OK now admit we’re just writing CSS but doing so very awkwardly
That’s the point I often get to when using Tailwind. “Ok, can we just admit to ourselves we’re just writing CSS now, but awkwardly?”
Via Eric’s newsletter.
Have to constantly remind myself of this too:
the goal of a book isn’t to get to the last page, it’s to expand your thinking.
And not just with books. Any form of content consumption (or experience, for that matter).
As quoted in a tweet, this is Katherine McCoy’s introduction in Digital Communications Design in the Second Computer Revolution by Stephanie Redman.
Written in 1998, it’s a perfect description of what it means to be a web “designer”:
This environment requires a much different visual design strategy than that of the traditional perfectionist designer. What are the implications for graphic designers trained in the modernist traditions of clarity, formal refinement and professional control? We can no longer think of our work as the production of precious, perfect artifacts, discrete objects, fixed in their materiality. The designer is no longer the sole author, realizing one's own singular vision. This forces a reordering of our design intentions. The designer is an initiator, but not a finisher, more like a composer, choreographer or set designer for each audience member's improvisational dance in a digital communications environment.
Or maybe even a director like in film.
The exploiter simply hears music, sees the reaction the music has on other people, may have no real idea why that music is good, but they try to mimic the circumstance that created the value from the music.
The exploiter simply sees the web, sees the reaction the web has on other people, may have no real idea why a website is good, but they try to mimic the circumstance that created value from the website.
Paul Ford commenting on the “marketplace of ideas” and the “global town square”:
you think that you’re delivering ideas, debate, and philosophical exchange. You are not...What you are delivering, always, is validation. You can’t escape this when you are making content. People consume the content, seeking validation in context. No one organically seeks out conflict and pressure against their carefully constructed belief system.
We thought we were getting a global space to discuss ideas, allowing the best ones to rise to the top for everyone’s benefit. What we got was the exact opposite: a global space where nothing can be discussed.
it’s bizarre that we thought that this could be the global town square, because it’s actually the opposite. It’s a system for keeping ideas out of the commons because it’s too intense and too emotional. And so what you end up with is everybody having conversations in the group chat back along their ideological lines [and pointing to stuff out in the global space] and going, “that’s a nightmare”.
via Stefan Judis:
Treat beginnings like endings: celebrate them, document them, let someone else pick up where you leave off...By ending well, you give yourself the freedom to begin again.
See my post about quitting.
Simone touching on the phenomenon of presenting ourselves as a product on our “personal” websites with phrases like, “I’m Jim Nielsen. A designer, developer, and writer with 20+ years experience on the web.” Boring! (I need to fix my home page.)
defining myself through job roles, awards, or the fact that I might be a public speaker, is good for a resume. Anywhere else, it becomes deeply uninspiring and uninteresting. That is what I might be doing for a living right now, but it doesn’t represent who I am as a person with values, interests and priorities.
There’s utility here for sure, but there’s also a painful realization that you’re simply commoditizing yourself.
I don’t “help companies” achieving something, I work for a salary doing things that, when luck strikes, I might even enjoy.
Here’s Barry Hess:
I try to imagine what my life would look like if I was stuck with only the relationships geographically close to me. I have those relationships as well, and I treasure them, but they simply cannot offer the diversity of thought, background, and experience that digital relationships allow. I’m so incredibly thankful to live in an era where I can have the best of both worlds.
This was via Manuel’s post along with his commentary:
The global nature of the web is an underappreciated quality. Like, can we just stop for a second and appreciate the fact that I’m typing this while sitting in Italy and you’re reading this somewhere else on the globe? It’s fucking amazing.
Amazing indeed. And I couldn’t agree more.
We need to be very careful about justifying bad experiences with "perfect is the enemy of good" when we should be striving harder for good itself. The popularity of this expression is responsible for a lot of garbage. Sometimes we just shouldn't do the quick and easy thing—even if we've tricked ourselves into thinking it's temporary. Exists is also the enemy of good.
Great counterpoint to the popular refrain. My experience resonates with this idea that people are often less willing to invest in a good solution when a mediocre one already exists.
My recent post about filter icons resulted in quite a few reactions, most of them telling me things I already knew. I struggle with people implying I haven't thought about what they're now kindly offering me as new information.
Yeah been there. What do you do?
Letting things go always feels better than trying to win something that isn't even a competition.
Good point. But does it influence your writing?
I would lose my voice if I tried to optimize my writing around the expected reactions. I'd dumb everything down, argue in every direction, just to be safe. That's just not an option.
I love hearing from and writing to people who read my writing and engage in a good faith conversation.
But sometimes I also just have to ignore others.
Quoting Tim Berners-Lee:
One of the beautiful things about physics is its ongoing quest to find simple rules that describe the behavior of very small, simple objects. Once found, these rules can often be scaled up to describe the behavior of monumental systems in the real world. […]
If the rules governing hypertext links between servers and browsers stayed simple, then our web of a few documents could grow to a global web. The art was to define the few basic, common rules of “protocol” that would allow one computer to talk to another, in such a way that when all computers everywhere did it, the system would thrive, not break down.
Turns out tags, folders, comments, stars, hearts, upvotes, downvotes, outliners, even semantic triplets, they’re all just links.
color spaces are all constructs. People just make them up! Useful ones are constructed in order to do useful things, but there’s no, like, One True Fundamental Color Space.
There’s no “right” color space. Only useful ones:
colors – like tastes, touches, and smells – don’t have any kind of innate geometric relationship to each other. When we arrange colors around a wheel, or set them into any other space: we did that.
All of this is to say: color spaces can’t really be “right” (or “wrong.”) They can only be useful.
Why you’ll want oklch:
Oklab is pretty good!
People tend to think about color in terms of three variables: lightness, chroma, and hue.
Oklab does a good job of isolating these variables, but in order to use them, we have to navigate it using polar coordinates instead of rectangular ones. When we navigate Oklab this way, we call it OKLCH.
The visualization of polar-coordinate referencing was eye-opening.
Everything about this article satisfies my long-running desire to have someone write a piece on color spaces I can understand.
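That rectangular-to-polar move is plain trigonometry: chroma is a color's distance from the neutral gray axis, hue is its angle around that axis. A quick sketch of the conversion (the function name and sample values are illustrative, not from the article):

```javascript
// Convert Oklab's rectangular coordinates (L, a, b) to OKLCH's
// polar coordinates (L, C, H). Lightness passes through unchanged;
// chroma is the distance from the gray axis; hue is the angle
// around it, in degrees, normalized to [0, 360).
function oklabToOklch(L, a, b) {
  const C = Math.hypot(a, b);
  const H = ((Math.atan2(b, a) * 180) / Math.PI + 360) % 360;
  return { L, C, H };
}

const { C, H } = oklabToOklch(0.7, 0.1, 0.1);
console.log(C.toFixed(3)); // "0.141"
console.log(H.toFixed(1)); // "45.0"
```

Same color, same three intuitive knobs (lightness, chroma, hue); only the coordinate system changes.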
Maybe you’re a YC startup and you get some credits to get you started, but that’s definitely the drug dealer model [where] the first one’s free
Lol, drug dealer model. Love it.
We’ve managed to turn text notes into closed, proprietary formats that are specific to a single piece of software. That’s a bummer. You can’t sync your Apple Notes to Dropbox.
It really is a shame how locked into silos our data is these days. Oh, and the privacy around that data isn’t too great either.
Why is the default that, when I use software, the product managers and support people can just read my data whenever they want?
File over app is a philosophy: if you want to create digital artifacts that last, they must be files you can control, in formats that are easy to retrieve and read. Use tools that give you this freedom.
the files you create are more important than the tools you use to create them. Apps are ephemeral, but your files have a chance to last.
The web equivalent is: the websites you create — their content and functionality for end users — are more important than the tools you use to create them.
What your site is built and served with is ephemeral, but the content is what has a chance to last.
Really interesting writeup on the history of robots.txt. (I’ve written previously about my feelings around robots.txt and AI bots.)
the main focus of robots.txt was on search engines; you’d let them scrape your site and in exchange they’d promise to send people back to you. Now AI has changed the equation: companies around the web are using your site and its data to build massive sets of training data, in order to build models and products that may not acknowledge your existence at all.
The robots.txt file governs a give and take; AI feels to many like all take and no give.
That’s the problem: the incentives for an open web are quickly dwindling.
In the last year or so, the rise of AI products like ChatGPT, and the large language models underlying them, have made high-quality training data one of the internet’s most valuable commodities. That has caused internet providers of all sorts to reconsider the value of the data on their servers, and rethink who gets access to what. Being too permissive can bleed your website of all its value; being too restrictive can make you invisible. And you have to keep making that choice with new companies, new partners, and new stakes all the time.
Great ending:
[the creators of robots.txt] believed that the internet was a good place, filled with good people, who above all wanted the internet to be a good thing. In that world, and on that internet, explaining your wishes in a text file was governance enough. Now, as AI stands to reshape the culture and economy of the internet all over again, a humble plain-text file is starting to look a little old-fashioned.
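For anyone who hasn't looked inside one, the humble plain-text file really is just user-agent/directive pairs. A sketch of what opting one AI crawler out while staying open to everything else might look like (GPTBot is OpenAI's crawler; these exact rules are my illustration, not a recommendation):

```text
# robots.txt — opt out of one AI training crawler, allow everyone else
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```

And as the article notes, it's purely advisory — compliance is voluntary, which is exactly the give-and-take that AI crawlers have upended.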
Brian Lovin talking about the shift from email et al. to Slack:
But this shift came with a cost: as the friction to share fell, the quantity of shared things skyrocketed. As quantity skyrocketed, it became harder to find the signal in the noise. It gave us the tools to think out loud in front of hundreds of people, molding incomplete thoughts and ideas one push notification at a time.
Sounds like AI on the public internet. But I digress. Back to Slack:
[Slack is] a tool that lets anyone distract everyone with each new message.
Damn this is on point.
Ah, look how much work I am getting done! you think to yourself as you bounce uncontrollably between a dozen threads and channels of varying importance.
what lies between me and having done a good job is pushing stuff around in Figma until what I think should be visible [to the person implementing it] is actually visible. It feels like a chore.
I personally believe that software is never built with more attention to detail than when it’s coded by the person who designed it.
I agree.
You’ll never prevent all of them. There were visual bugs in production before you started and there will be visual bugs in production long after you leave. But sometimes they’re almost as quick to fix as they are to document and explain, so it can be effective to do it yourself when you spot them.
I’ve found this to be true too. It often takes more time to document how something should be implemented and get it into the schedule than to just do it yourself.
Metrics are trailing indicators of qualitative improvements or degradations you’ve made for your customers… they are not the point of the work.
The issue with saying “a picture is worth a thousand words” is it sets up a false battle between words and images...
Letting words and images compete for supremacy reduces the complex relationship between images and words to a direct, quantifiable comparison.
Ok, sold. I need to stop using that phrase. One is not better than the other. It’s not an either/or. They both have roles to play, strengths and weaknesses, and they reinforce each other.
While images can instantly evoke feelings or set a scene, they lack the specificity and explanatory power that words can provide. While words can be precise and informative, they might not capture the immediacy or emotional resonance that a well-chosen image can deliver.
Instead of pitching images and text against each other, we need to learn when to use which, and how to use both images and words to strengthen each other...
Images and words are different forms of language. One can express that which the other cannot.
To summarize:
The most powerful combination of text and image happens when the text says about the image what you can’t see at first sight, and when the image renders what is hard to imagine.
Do your images add meaning? Or are they merely decorative?
Ironically, the phrase “an image is worth a thousand words” is conveyed via words and not an image.
AI images make your audience think: “If they use cheap AI for images, they probably use it for the rest, too.” It raises questions about the authenticity of your content.
Yeah, not gonna lie, I make these judgements.
The question is: Do you always need an image? The use of images shouldn’t be an unconditional “always.” Instead, it should be a deliberate choice, driven by the specific needs and objectives of your content.
Don’t depend on images to do your job of writing.
A talk by James Long at dotJS 2019 around “local first” software. A few things that stood out to me:
- “Local apps” are fast because the computing takes place on your local device instead of a back and forth between client/server.
- “Local first” takes away all your performance concerns (that stem from having to use the network) because it simply takes away the network — and that's a whole swath of performance concerns.
- Because there’s no network, your app is only bound in performance by its computational (and memory) limitations.
- A whole swath of security concerns disappears if your app is local. As James notes, “You can’t SQL-inject yourself.”
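The core move is easy to sketch in code. Here’s a minimal, illustrative local-first store (invented names, not from James’s talk): reads and writes hit an in-memory store instantly, with no network round trip, and a background queue syncs changes out whenever a connection is available.

```typescript
type Todo = { id: number; text: string };

class LocalFirstStore {
  private items = new Map<number, Todo>();
  private pendingSync: Todo[] = [];

  // Instant: no await, no network. The UI can read back synchronously.
  add(todo: Todo): void {
    this.items.set(todo.id, todo);
    this.pendingSync.push(todo); // queued for background sync
  }

  get(id: number): Todo | undefined {
    return this.items.get(id);
  }

  // Called opportunistically when the network is available; the app
  // never blocks on this.
  async sync(push: (batch: Todo[]) => Promise<void>): Promise<void> {
    const batch = this.pendingSync.splice(0);
    await push(batch);
  }
}
```

The point of the sketch: because `add` and `get` never touch the network, the whole class of latency concerns James describes simply doesn’t exist in the hot path.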
Clive Thompson on the clickbaity YouTube thumbnails phenomenon:
[they] exacerbate what I've come to think of as the "spit-take-ization" of the Internet: The assumption that the only way to get our attention is to promise we're about to see something so Xtreme you just won't believe it dude.
If you're just going by the data, then you're confirming, not deciding.
Building software, we have this fascination with making “objective, data-driven” decisions. But it’s an illusion. Data may filter out some subjectivity, but what’s left is always subjective. It’s best we admit this to ourselves and move on.
I’d rather a human with good taste than a computer with a good algorithm.
I think the honest answer is that most people can’t gain perspective and moderation and maturity by reading someone’s advice online. The wise 35-year-old dads on Twitter can follow their own advice about work-life boundaries because they’ve suffered the consequences. There’s no shortcut to perspective: you have to acquire it by experiencing bad things and suffering consequences.
There’s no shortcut around experience — tech will never solve that.
Advice on the age-old debate about over working:
If you can figure out the difference between busy-work that only benefits your employer, and the kind of work that makes you as a person feel like you’re making progress and becoming more skilled, then you’re ready to learn.
The perspective being: it’s not about work or not work. It’s about cultivating your enthusiasm and following your curiosity. If you’re doing that, it’s not work. And you take the results with you anywhere you go, at work or not.
Bundling is a technique, not an edict.
when I say “don’t bundle” I’m not saying “don’t ever bundle”. I’m saying don’t start with a bundle. Maybe grow into that need. Prove you need it.
What you hear Brian say multiple times on the podcast is: measure, observe, improve (based on data) — he’s basically advocating the scientific method, lol.
The state of the art in the browser is exceeding the state of the art in practice.
As far back as I can remember working on the web, that has never been the case. Fun times!
integrations suck...[they involve] reading a bunch of docs, getting confused for a while, and writing some tedious, probably difficult to debug, code.
Yeah, that ’bout sums it up.
Paul Ford:
the entire structure around social media and how we interact and how we talk today tells people that they have this intense power and voice and this ability to effect change that they don’t really have...
we jam this intense moral pressure on everyone. It’s like, if you don’t try hard enough to change the world, you have failed everyone.
Really loving Paul’s writing for The Aboard Newsletter (I mean, he is a professional writer).
If you don’t know what product is being sold, you are the product. If you don’t know what business you are in, you are in advertising.
I’ve not heard the “you are in advertising” line before. That’s brilliant.
So much today is focused on onboarding zillions of people and squeezing them like citrus. But more people, more problems.
As an introvert, I agree: mo’ people, mo’ problems.
Software should be good for one person, good for two, good for ten, and then after that, it should take a hard look at itself.
People don’t think in software features. People see a tool, and they want to get something done. As soon as that something gets done, they’re done with the tool. Sure, the nerdy product manager may love to poke around the entire product, but for the majority of people, software is mostly annoying. It’s a thing they negotiate with to get what they want.
People don’t invest their time in software. They mostly tolerate the little walking tours (AKA onboarding) and sales pitches (AKA upgrading) to get to the thing they want out of it.
where have all the websites gone? Well, the people who make them have all gone to war for the capitalist machine. They grew up and got jobs. A natural part of growing up. Silos came and plucked their voices. Invasive memes and short form content grew in their place. Hustle overtook leisure. Harassment overtook openness. Influence overtook creativity. An economy of interestingness replaced by one of followers, likes, and engagement metrics.
Well damn. That cuts deep.
Whenever a problem can be solved by native HTML elements, the longevity of the code improves tremendously as a result. This is a much less alienating way to learn web development, because the bulk of your knowledge will remain relevant as long as HTML does.
Paul Ford on the Aboard podcast:
When you present yourself purely digitally, you commoditize yourself. No matter how much you think you’re not, you are a two-dimensional rectangle.
I’ve found that many people who know me in “real life” think my personal website is weird.
Like why would you have your own personal website? That’s weird ha.
As my aunt Grace, who lived in the Ozarks, put it, “I get what I want, but I know what to want.” - The Joy of Being a Woman in Her 70s
Via Eric’s newsletter:
Does growing plants make me a better designer? Probably not, but that’s not why I do it. I do it because it brings me joy and because, in some abstract but deeply felt way, both activities tap into the same well of satisfaction and challenge.
The same could be said for working on your personal website, which reminds me of this quote from the article:
Interacting with plants offers beauty, comfort, homecoming, and grounding. It may heal that which is broken. — DeJong, 2021, p. 25
Again: working on a personal website. It can heal what is broken.
But beware: it may also break what has been healed.
It is interesting, isn’t it, that these supposedly deeply considered philosophical movements that emerge from Silicon Valley all happen to align with their adherents becoming disgustingly wealthy
Funny that.
I like this observation about the innate compassion induced by the form of blogging (especially in contrast with social media):
In his early drafts [George Saunders] says he’s inclined to be initially sarcastic and throughout his editing process he tries to be more specific and less boring. George noticed for his writing this has a tendency towards love and compassion. I want to emulate that as much as possible…I can be sarcastic myself, so I try to be eager to give grace to anything I’m writing about. I am not perfect at this. It takes practice and iterations (rewriting!). The “slow” speed of making a blog post gives more space for this. And social medias have a strong tendency away from that compassion.
The topic page on the Remix docs:
Performance: While it's easy to think that only 5% of your users have slow connections, the reality is that 100% of your users have slow connections 5% of the time.
Resilience: Everybody has JavaScript disabled until it's loaded.
How Remix works:
Remix embraces progressive enhancement by building its abstraction on top of HTML. This means that you can build your app in a way that works without JavaScript, and then layer on JavaScript to enhance the experience.
What it means for you (and your users):
It's not about building it two different ways–once for JavaScript and once without–it's about building it in iterations. Start with the simplest version of the feature and ship it; then iterate to an enhanced user experience.
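That iterative layering can be sketched in a few lines. This is an illustrative sketch of the pattern (invented names, not Remix’s actual API): iteration one is a plain HTML form the browser can submit with zero JavaScript; iteration two is an enhancement layer that, if it never runs, leaves the baseline fully working.

```typescript
// Iteration 1: plain HTML. The browser handles submission natively,
// so this works before (or without) any JavaScript loading.
function renderNewsletterForm(): string {
  return `<form method="post" action="/subscribe">
    <input type="email" name="email" required>
    <button type="submit">Subscribe</button>
  </form>`;
}

// Iteration 2: layered enhancement. Returns whether enhancement was
// applied; if not, the baseline form above still functions.
function enhance(form: { addEventListener?: Function } | null): boolean {
  if (!form || typeof form.addEventListener !== "function") {
    return false; // no scripting available; baseline still works
  }
  // ...attach fetch-based submission, inline validation, etc.
  return true;
}
```

“Everybody has JavaScript disabled until it’s loaded”: in this sketch, the window before `enhance` runs is exactly the window the plain `<form>` covers.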
Paul Ford:
And after building software for collective decades, I think everyone would say: It’s still really freaking hard. Deadlines slip. Requirements change. Brilliant ideas turn out to be dumb. You change your mind and throw work, money, and time away.
it wasn’t Anna Indiana or any other Silicon Valley attempt at culture that brought anyone pleasure in 2023—it was real artists like Taylor Swift and Beyoncé, who took over the world with these perfectly crafted mega-spectacles that pulled millions of people into stadiums and movie theaters.
But what I wish I could get across to our friends in the Palo Alto area is that the last 20% is really, actually, hard—not just in tech, but in writing, music-making, carpentry, middle-school teaching, cobbling—it’s a grind. Sometimes for dumb, bureaucratic reasons, sometimes because it’s just hard to make things happen, inside or outside of the computer. And everything happens—all the growth, profit, promotions—in that last 20%. That’s when human connection happens.
If you do work that is hard, kind of a grind sometimes, and involves lots of little and small decisions, I think you’re pretty safe for a while.
Just as most jobs are harder than people think, most things in life are nowhere near as generic as people think.
Using “Re:” is a slippery growth hacker move to get my attention.
Some real shady stuff here. A great reminder:
The owner of every web extension you install wades through these kinds of emails [offering them money for their userbase], and can be tempted by them.
Gall’s law:
A complex system that works is invariably found to have evolved from a simple system that worked.
A complex system designed from scratch never works and cannot be patched up to make it work.
You have to start over with a working, simple system.
Baldur’s commentary:
I always advocate [trying] to bake evolution into your design. It’s fine to have a complex system design that you’re aiming for in the long term, but you need to have a clear idea of how it will evolve from a simple system that works to a complex system that works
This post was full of great blogging advice — in my opinion.
First common question: what should I write about?
You ask yourself: What would have made me jump off my chair if I had read it six months ago (or a week ago, or however fast you write)? If you have figured out something that made you ecstatic, this is what you should write. And you do not dumb it down, because you were not stupid six months ago, you just knew less. You also write with as much useful detail and beauty as you can muster, because that is what you would have wanted.
Don’t think anyone will be interested in that?
Luckily, almost no one multiplied by the entire population of the internet is plenty if you can only find them.
Finally, I really loved this analogy of the social structure of the internet being shaped like a river:
People with big followings, say someone like Sam Harris, is the mouth of the Mississippi emptying into the Mexican Gulf. Sam has millions of tributaries. There are perhaps a few hundred people Sam pays close attention to, and these in turn have a few hundred they listen to—tributaries flowing into headwaters flowing into rivers. The way messages spread on the internet is by flowing up this order of streams, from people with smaller networks to those with larger, and then it spreads back down through the larger networks. Going over land, from one tributary to another, is harder than going up the stream order and then down again.
A writing app that thinks for you is a robot that does your jogging.
Redesigning your personal website is one of life’s great pleasures.
This line is more powerful when you hear it in the talk, as it’s a bit confusing when written down, but I still love it:
[Breaking changes] are unplanned work, which means you have to do work that you didn't expect to do in order to keep building the thing you already built that worked.
In the end, companies who operate with a shared belief in realizing inclusion and equity in their products (as well as their processes and organizational makeup) end up making better stuff than companies who want to tell good stories about inclusion, without all the heavy lifting. The former create a culture that leads to higher-quality products. The latter make great ads.
Also, this post contained a reference to this intriguing exchange from an Apple shareholder meeting:
“When we work on making our devices accessible by the blind,” [Tim Cook] said, “I don’t consider the bloody ROI…If you want me to do things only for ROI reasons, you should get out of this stock.”
Some organizations see lines of code as a productivity metric, but I see them as a liability metric.
Also, lol, this great line from Dave is my new fav saying when it comes to doing just about anything in a codebase:
I [could do it], but the juice would not be worth the squeeze
And I love this idea: if only computers made noise, so when you made them work better they got quieter and less annoying and people actually noticed.
I think collecting metrics are a nice way to grease the wheels of your organization and show progress on work that is essentially invisible. If I do a great job, no one will notice but it will be less noisy or potentially faster. A part of me wishes computers made a physical noise (beside Slack notifications) whenever they were having a bad time, then it’d be more obvious to everyone when code needs fixing. You wouldn’t have to convince anyone to pay down some technical debt because everyone would be yelling “Can you please make that fucking computer stop squeaking?”
Learnability is not an intrinsic property of a UI; it is a function of the context (cultural and otherwise) in which it is used.
Makes you wonder how understandable any UI would be to someone from 50 years ago — or 50 years in the future!
(Really enjoyed this post by Lea! And this linked thread on UX StackExchange was intriguing too.)
[our] philosophy…for @linear:
Focus your efforts on the actual task at hand (building software), not on side quests (building process or management systems).
Also this is me quite often:
it's usually faster to redraw the component than trying to maintain a library
design is only a reference, never any kind of deliverable itself...The real design is the app
The job of the design system team is not to innovate, but to curate. The system should provide answers only for settled solutions: the components and patterns that don’t require innovation because they’ve been solved and now standardized. Answers go into the design system when the questions are no longer interesting—proven out in product. The most exciting design systems are boring.
Brian's take: we are in the flashy era of landing page design where aesthetics are deemed more important than substance.
There is an obsession [right now] with “bento grids” where every box has a micro interaction. And it's insane because it gets likes on Twitter — so people keep doing it because it feels good to get likes on Twitter.
Every landing page is over-invested in “What’s the micro-interaction, scroll-animation we can add here?” Instead of, “How do we explain what our product does really clearly?”
Nailed it.
In contrast, what’s Brian’s approach to his company's landing page design?
There is a series of words that I want you to read as you scroll down the page. The visuals [are supportive] and yeah you might actually linger on those for a second. But I want you to leave the landing page knowing what our product does and whether it's useful for your team.
For all its warts, the web has become the most resilient, portable, future-proof computing platform we’ve ever created
After nearly thirty years of making websites – despite being someone who cares deeply – I’m more confident in my ability to produce an inaccessible experience than an accessible one. It’s why I will advocate until the grave that making good accessible websites needs to be easier.
The more you see how other people do what they do, the harder it becomes to do things differently.
So pay attention a little, but not too much — leave more room for your own ideas than for theirs.
Maggie pointing out how the sausage is made with LLMs:
most of the training data we used to train these models is a huge data dump so enormous we could never review or scan it all. So it's like we have a huge grotesque monster, and we're just putting a surface layer of pleasantries on top of it. Like polite chat interfaces.
We’re trying to make an unpredictable and opaque system adhere to our rigid expectations for how computers behave. We currently have a mismatch between our old and new mental models for computer systems.
Ultimately, her assessment of the problem:
The problem here is we're using the same interface primitives to let users interact with a fundamentally different type of technology.
Albert Einstein admired Niels Bohr for:
uttering his opinions like one perpetually groping and never like one who [believed himself to be] in the possession of definite truth.
Basically the opposite of opinions on the internet.
Also this (from the book):
Original work is inherently rebellious.
Written in 2013:
Like it or not, JS is built into the web as an optional technology...on a lower level of architecture the web does not require JS to work.
If you think about it, at a low level even HTML is not required. It’s URLs returning data. What do you think powers all these APIs for native apps? The web!
There’s something fundamental and robust about being able to request a URL and get back at least an HTML representation of the resource: human-readable, accessible, fault tolerant.
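One way to picture “URLs returning data” is content negotiation: the same resource, served as raw data to API clients and as human-readable HTML to everyone else. A hedged sketch (invented names, no framework assumed):

```typescript
type Accepts = { accept: string };

// One resource, multiple representations, chosen by the Accept header.
function representResource(req: Accepts, data: { title: string }): string {
  // API clients asking for JSON get raw data...
  if (req.accept.includes("application/json")) {
    return JSON.stringify(data);
  }
  // ...everyone else gets the human-readable HTML representation,
  // the web's fault-tolerant baseline.
  return `<h1>${data.title}</h1>`;
}
```

The HTML branch is the floor the quote is pointing at: whatever else a URL can return, there’s always a representation a human with a browser can read.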
The real test is the question “what are you willing to sacrifice to achieve simplicity?” If the answer is “nothing”, then you don’t actually love simplicity at all, it’s your lowest priority.
I think a good test of whether you truly love simplicity is whether you are able to remove things you have added, especially code you’ve written, even when it is still providing value, because you realise it is not providing enough value.
Everything about this article is so good.
One of my fav repeating jokes was how “Grug tempted reach for club when too much ___ happen but always stay calm.”
A few of my fav excerpts.
given choice between complexity or one on one against t-rex, grug take t-rex: at least grug see t-rex
sad but true: learn "yes" then learn blame other grugs when fail, ideal career advice
danger, however, is agile shaman! many, many shiney rock lost to agile shaman!
whenever agile project fail, agile shaman say "you didn't do agile right!" grug note this awfully convenient for agile shaman, ask more shiney rock better agile train young grugs on agile, danger!
grug warn closures like salt, type systems and generics: small amount go long way, but easy spoil things too much use give heart attack
is maybe nature of programming for most grug to feel impostor and be ok with is best: nobody imposter if everybody imposter
Yeah, this kinda rings true in my experience:
If you don’t promote and communicate your work, no one else will.
It’s harder to get visibility for your work than to do good work.
There’s something linguistic about CSS, even in the way it fails; when you type the wrong comma or omit a certain bit of code then CSS doesn’t care. It’ll skip over it and try and understand the next bit. Just like spoken language, where you can still understand someone if you only hear 90% of what they say.
—Sigh— vendor studies.
Isn’t it marvellous how vendor-funded studies always seem to back up their claims?
vendors have a very real incentive to want us to believe that the big problems in software development can be solved with their tools.
Your ability to iterate and evolve your software is your advantage:
A typical software system used in business, for example, will have complex rules that are usually not precisely defined up front, but rather discovered through customer feedback. And in that sense, how quickly we converge on a working solution will depend heavily on iterating, and on our ability to evolve the code. This wasn’t part of their exercise.
I accept the responsibility…[but] I myself will be unaffected.
Sometimes satire is the best way to express the absurdity of life situations.
…Oh, and if you do get bit by a shark, try to make sure it happens before your healthcare expires.
There are many, many reasons I am opposed [to A/B testing within our product] but the one we should care about is how it fosters a culture of decision paralysis. It fosters a culture of decision making without having an opinion, without having to put a stake in the ground. It fosters a culture where making a quick buck trumps a great product experience.
Yes!
one of our core values, is centered around caring about the small details, and that by its very nature is subjective. What details? Which ones matter? Those decisions all center on taste, and around someone making a subjective decision.
[people wrongly conclude] that data is only measured by the outcome of the project, rather than learned or otherwise informed by past experiences…
Data is not a substitute for critical thinking. The test said it was successful, but the outcomes, which is what you base future decisions on, showed otherwise.
So what do you do instead of relying on data? Rely on Taste. What is taste?
taste is curated through hiring, and comes from the team’s domain expertise, their diversity of background, and their learnings both building and using our product. It comes from talking with customers on Twitter, from engaging them in support tickets and on Github, it comes from direct transparent conversation.
It’s simple: spend time with customers and use the product.
We don’t have complicated ways to rank or vote on projects. Instead, we believe that if the leadership team and the whole company spend time with customers and use the product regularly, the needs and opportunities become more apparent. Then it’s just a matter of sequencing and scoping.
Just say “No!” to A/B tests.
We don’t do A/B tests. We validate ideas and assumptions that are driven by taste and opinions, rather than the other way around where tests drive decisions. There is no specific engagement or other number we look at. We beta test, see the feedback, iterate, and eventually we have the conviction that the change or feature is as good as it can be at this point and we should release it to see how it works at scale.
Linear is praised for its “design”, and yet they don’t even have formal design reviews!
We don’t have formal product or design reviews. It’s more ad hoc and iterative, which enables us to move faster. For example, our designers share an early design in a project-related Slack group and get informal asynchronous feedback from many different people that way.
Simple. Lean. Just take all the cruft, process, etc., away. You can always add it back — in fact, if you’re not adding it back, you didn’t take enough away.
While engineering and design have their own meetings for their functions, for the most part, we talk about “Product” holistically and not engineering, design, and product separately
This idea of thinking about product “holistically” is spot-on IMO.
There’s more, but the article is paywalled.
However, this nugget (from a screenshot) is the industry hot take I want.
Anyone who has built things knows that a lot of ideas and opportunities emerge when you actually start building something. A good example of this is when an engineer on our team, Andreas, built the context menus. We didn't ask or spec it for him to figure out how we can make sure the context menu doesn't close if the user hovers their mouse a few pixels off. He just felt it was needed. These kinds of details are almost impossible to spec or plan for if you're not there building it or if the people building it don't have ownership.
My take is that this is one of the reasons why so much of the software we have today seems fine on paper but doesn't actually work well in practice. We as an industry optimized the process too much and created a Henry Ford-style feature factory where each role is very specific and production speed is more important than craft. (The other reason is A/B testing.)
It's worth noting that there are downsides to this approach. It means that engineers and designers do have to play the role of the PM, which includes communicating, talking to users and stakeholders. This means that engineers and designers are not purely coding or designing, which can sometimes feel like it slows things down. Personally, I believe often it's better to go slow to go fast, meaning we generally get projects done right the first time, with minimal fixes later. The second downside is that it's harder to hire for these roles, as many people don't have experience working this way.
Talkers need to recognize that not everyone loves to think out loud, and that giving space for writing is part of what it means to make use of the best brains around you. Writers need to remember that writing isn’t some perfected ideal of thinking and that making space for the messy, chaotic, and improvisational work of talking things out is often exactly what a team needs to create change.
Everything in this piece is spot on to me.
the efficiency of communication isn’t solely a measure of the time it takes to move information from one head to another; it’s also the time and energy required to build and sustain collective understanding.
Rich Harris on the JS Party podcast:
I think people are somewhat in denial about the costs that toolchains impose on them.
And I’m not gonna lie, I’m starting to come around to his take on TypeScript (for specific contexts, mind you):
At some point, [TypeScript] clicks and then you realize you’re not fighting with the compiler anymore, you’re just giving the compiler the means to help you
Earlier this year I rewrote [my app] from scratch. I corrected old bugs and design flaws in favour of new bugs and design flaws.
Lol yup, sounds about right.
The ever-practical Chris Coyier:
Life is full of unreviewably large choices. People move to cities because they visited once and thought the downtown was cute. They have no way of really knowing if they’ll like it there, but they do it anyway, because sometimes you just gotta make the call.
a blog should be embarrassing!
Ha, yes agree! But why?
That’s like half the point of a blog, to be wrong about things ruthlessly, over and over again, to stumble in front of a crowd of strangers and hope that they at least smile at your attempt.
When asked:
why [do] you advocate the use of multi-page web apps and not single-page ones?
Jeremy has a great response, including this bit about going with the grain of the web:
I find the framing of your question a little concerning. It should be inverted. The default approach should be to assume a multi-page approach (which is the way the web works by default). Deciding to take a JavaScript-driven single-page approach should be the exception…
When it comes to front-end development, I’m worried that we’ve reached a state where the more complex over-engineered approach is viewed as the default.
I kinda like this prioritization framework:
- Generally, prioritize company and team priorities over my own
- If I’m getting deenergized, artificially prioritize some energizing work. Increase the quantity until equilibrium is restored
- If the long-term balance between energy and proper priorities can’t be balanced for more than a year, stop everything else and work on solving this (e.g. change your role or quit)
Also more blog posts like this please:
I’ve come to appreciate that many of the folks I silently accused of malpractice were balancing context that I had no idea existed.
Also good advice (that’s also applicable to blogging):
There’s no one solution, but you’re going to accomplish less in your career if you’re so focused on correctness that you lose track of keeping yourself energized.
And lastly, I like this open acknowledgement that sometimes you have to do the thing that’s not a top priority so that you can generate the energy to do the thing that is a top priority:
It’s not only reasonable to violate perfectly correct priorities to energize yourself and your team, modestly violating priorities to energize your team in pursuit of a broader goal is an open leadership secret. Leadership is getting to the correct place quickly, it’s not necessarily about walking in the straightest line. Gleefully skipping down a haphazard path is often faster than purposeful trudging down the safest path.
A great talk from Simon Willison full of practical advice on large language models.
I liked his personal AI ethics.
I also liked his point about how LLMs have helped him redefine “expertise”. Expertise isn't knowing every API of a tool by heart — that’s trivia. Expertise is knowing what the tool can do and what kinds of questions to ask in order to use the tool effectively.
Also this line just so accurately describes tech:
It's not easy to do well, but it's trivial to get up and running.
I also thought it was interesting that he suggests running the models on your own laptop: you see a lot more of the hallucinations and holes in the models (versus the corporate-hosted ones, which build in safeguards), which gives you a better understanding of how they work.
And over the last 3-4 years, I've probably attempted a re-design [my personal site] no less than 10 times. I would start to design in Illustrator, I would then start building and then abandon it. My biggest hindrance was wanting my website to be tied to my personal identity. I would grow frustrated with it not fully embodying me and then decide I didn't need to re-design it and then the cycle would start again.
I feel so seen.
Every product can claim to make people's lives better; if you want to stand out, you must link your app to a real, immense global crisis. Try this: “Women spend more time caring for pets than men. By designing an app that controls an automated kitty litter scooper, we are freeing up women to focus on their communities and set their own agendas. WiskrSküps is critical feminist infrastructure.” Can you link your product to mitigating climate change? Improving education? Smoking cessation? Panda habitat preservation? I can, in 30 different ways. That's why I'm your boss.
Lol, Paul Ford is a gem.
Credit-hogging is an essential part of any software release, and getting good at it is what defines a true organizational leader. I always make a lot of time for it. Again: That's why I'm your boss.
He nails working in software.
While the vast majority of humans will be utterly indifferent to your announcement, you must drill in on the one or two who offer reactions that fall short of total excitement. Be sure to blow up any criticism or misunderstanding, no matter how small, into a flat-out organizational panic. Slack can be a great tool to coordinate your overreaction.
And this take on auth is spot on:
Inevitably, right away, the app's login function will break. As a society we are incapable of authenticating users. It's a tragedy, one of our greatest failings.
Dominik argues for the syntax `Array<string>` over `string[]` in TypeScript (or JSDoc).
I for one am with him on this argument.
Also enjoyed this point about how “it’s shorter” has never been a great argument for the use of one syntax over another:
Fewer characters to write. As if keeping code short was ever a good indicator for maintainability.
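A minimal sketch of the two spellings (the variable names are mine, purely for illustration); both compile to the exact same type, so the argument is entirely about readability:

```typescript
// Both annotations describe the exact same type.
const a: string[] = ["x", "y"];
const b: Array<string> = ["x", "y"];

// The argument bites once the element type grows: the bracket form
// needs wrapping parentheses, while the generic form reads left to right.
const bracket: (string | number)[] = ["x", 1];
const generic: Array<string | number> = ["x", 1];

// Readonly arrays make the contrast clearer still.
const ro: ReadonlyArray<string | number> = ["x", 1];

console.log(a.length + b.length + bracket.length + generic.length + ro.length); // 10
```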
I’m confused how we all know the technology is spicy predictive text, an expert bullshitter, but we’ve said “Oh yeah, let’s roll this out everywhere and build businesses on this.”
After all these questions [about a particular feature], the team came to the same conclusion. We decided it would be best not to go through with it.
I think we could all use more decisions like this.
In order to produce a functional piece of software from AI, you need to know what you want and be able to clearly and precisely define it. There are times when I’m writing software just for myself where I don’t realize some of the difficulties and challenges until I actually start writing code.
This is very true.
I believe AI could create the software that has already been created faster than human programmers but that’s because someone figured out what that software should do
corporate authorship is…a synthesis of prior successes
Ah damn, that’s spot on.
Where Web Components shine is when your components need to go to many places. Components in a large company not only need to go to the React app, they also need to go to the Drupal site, the old Rails app, the internal Java app, the Vue app, or the static Eleventy site some intern built; the list goes on and on. Web Components offer a path to deliver components without delivering complex build toolchains, so they can more easily graft into situations where teams face a wide surface area of languages and frameworks whether through decades of decision making, mergers and acquisitions, or chasing the latest hotness.
Spot-on description of the technological challenges in enterprise.
On writing:
When it comes down to writing…what connects with people is you connecting with yourself.
On writer’s block (this is blogging):
Writer's block is not a failure to write. It's a failure to catch this feedback loop of enjoying what you're seeing and wanting to contribute more to it.
On failure:
“That didn’t work” is ok to say. But “That won’t work” is not a way to go through life.
On finding yourself through copying and imitation:
Failing to sound exactly like the person you want to sound like is a wonderful way to sound like yourself…I’m not thinking of this thing from scratch, “Ok I’m gonna do Jerry-esque things, but I'm still gonna sound like me.” No it’s more like, “I’m gonna sound just like Jerry.” And then the way I naturally, obviously don’t, that’s your personality.
Knowing which [time management] trick you need now—and which one you’ll need next time—comes with experience and the kind of situational awareness that can be cultivated with (wait for iiiiit…) time.
Ha, pun intended!
It turns out, not doing [what they enjoyed] was costing them time, was draining it away, little by little, like a slow but steady leak. They had assumed, wrongly, that there wasn’t enough time in the day to do [what they enjoyed], because they assumed (because we’re conditioned to assume) that every thing we do costs time. But that math doesn’t take energy into account, doesn’t grok that doing things that energize you gives you time back.
I like this idea: by spending time doing what you enjoy, you are sort-of earning more time — kind of like an investment I suppose. In other words, you don’t need to make time for what you enjoy, you need to do what you enjoy and that makes more time.
How does spending time on something make more time? Doing what you need to, what you enjoy, can help give you the energy to do everything else you need to.
Last month I redesigned my website, so it’s about time to do it again.
Yup. Sounds about right.
Baldur makes an interesting observation about how, in the early days of the web, the whole stack behind a website was likely closed: servers, browsers, operating systems, all closed source. This stands in contrast to today’s world where the server, database, client framework, and operating system are likely all open source (or made up of many open source components). He points out:
A majority of the value created by modern software ultimately comes from free and open source software.
It’s a good point. But many of the “open” aspects of today’s tech are starting to be open in theory, but essentially closed in practice:
There isn’t anything inherently proprietary about Eleventy Edge. In theory, there are a few “edge computing” services that should be able to support it, but in practice, the company that employed the project lead at the time and the only company actively funding the feature, is going to be the only one whose service is reliably supported.
This is so true of so many tools branded as “open” source but in practical, functional terms, they’re tied to a specific provider. But there are alternatives:
This is the reason why I’m excited about the partnership between Eleventy and CloudCannon and the project’s refocusing. It isn’t that the project will get simpler to use (though I’d be happy if it does) but the complementary nature of the collaboration creates a dynamic where every part of the project benefits the community as a whole, in a non-extractive way.
Pairing two technologies that complement each other is a powerful thing.
Jeremy, talking about his relationship to all the social networks coming and going these days, drops this indie web burn:
[Social networks are] all just syndication endpoints to me.
Mandy Brown nails it:
We have such a bias towards efficiency, towards optimization, that keeping open a messy process seems like an anti-pattern. But you can’t optimize exploration; you have to stay open to learning something unexpected, to turning around when you hit a dead end, to heading down a path you didn’t even know was there until you came upon it.
In the end, companies who operate with a shared belief in realizing inclusion and equity in their products (as well as their processes and organizational makeup) end up making better stuff than companies who want to tell good stories about inclusion, without all the heavy lifting. The former create a culture that leads to higher-quality products. The latter make great ads
Also an intriguing reference to this story detailing an exchange from an Apple shareholder meeting:
“When we work on making our devices accessible by the blind,” [Tim Cook] said, “I don’t consider the bloody ROI.” …“If you want me to do things only for ROI reasons, you should get out of this stock.”
(via adactio links)
Domains have so much potential as a personalized way to customize identities and as a decentralized way to verify reputation that builds off the existing web. For example, U.S. Senators have used the senate.gov domain to verify their identity on Bluesky without our involvement…The possibilities are wide in the domain-as-a-handle space.
Granted, this is often merely offloading identity verification to some other entity in the world — e.g. how does one get a senate.gov email? — but still I love it.
I’ve said it before, domains as internet handles will eventually break through. They’ll be the currency of online identity. Maybe I should’ve bought Google’s domain business…
A fascinating look at the implications of image optimization. The user experience of an image isn’t just when you see it on the product page, but it persists all the way until you receive the product in the mail and judge whether the image on the website matches your unboxing expectations in real life.
Until now, I’ve advocated reducing file size as much as possible even in cases where it reduced image quality. I’ve argued at times that a 1x image or a compressive image—a 2x image saved at low image quality and shrunk to fit—is good enough quality. What matters most is the speed of the image because of the impact on user experience and conversion rate.
While user experience is my primary concern, it isn’t the only one…
But what has a greater impact? A larger image file or a product that travels to a customer’s home, is shipped back to the merchant, and ultimately ends up in a landfill?
For ecommerce companies—particularly for those selling apparel—it likely makes sense to prioritize image quality above file size.
Ask yourself: If you visit the website of your local doctor's surgery to find out the opening hours, which browser is best: The one that displays the opening hours of the surgery, or the one that displays an XML parsing error message?
One of the great things about browsers is they're error-tolerant
Kind of wish more sites thought of their content this way.
Browsers are resilient — and we still manage to break them with our brittle websites.
AI in its current state is very, very good at one thing: modeling and imitating a stochastic system. Stochastic refers to something that’s random in a way we can describe but not predict. Human language is moderately stochastic. When we speak or write, we’re not truly choosing words at random—there is a method to it, and sometimes we can finish each other’s sentences. But on the whole, it’s not possible to accurately predict what someone will say next.
It’ll be interesting to see how writers adapt to these models. Take “finish each other’s sentences”, for example: word probabilities are based on patterns and rules, so what will make real human writing stand out from AI writing is breaking the long-standing rules on which AI is based.
In this light, we might be in for one of the greatest creative periods in history. Not because AI will help us do what we’ve always done just more efficiently, but because AI will force us to do new things differently, as that’s the only way we’ll stand out and be different.
Neil deGrasse Tyson noting that the “if all you have is a hammer, everything looks like a nail” phenomenon exists even in physics:
Seems like whatever’s your speciality, you think the solution lies within your expertise.
I have a confession: I used to not like CSS very much in my first few years of being a developer…Fast forward to today, where just thinking about CSS makes me giddy…I enjoy CSS.
So what changed? Well, I took the time to learn CSS.
I appreciate the candidness and honesty: I didn’t like it because I didn’t know it.
[AI] will not replace what you mean for the people around you.
There’s a tendency at times for organizations to treat performance as a checklist of sorts, particularly as we’ve seen the core web vitals metrics bring more attention to performance than ever before. You try to tick the box on those metrics to get them green, then call it a day. (This organization, to their great credit, did not do that.)
But none of that matters if those metrics aren’t painting a complete picture of how users interact with our sites.
Sometimes, it’s just so much easier to follow somebody else’s outline for what you should do and check their boxes than to actually think about it and decide for yourself, especially when your own conclusion deviates from their outline.
Many companies are weighed down by all sorts of prior obligations to placate. Promises salespeople made to land a deal. Promises the project manager made to the client. Promises the owner made to the employees. Promises one department made to another.
Saying “Yes, later” is the easy way out of anything. You can only extend so many promises before you’ve spent all your future energy. Promises are easy and cheap to make; actual work is hard and expensive. If it wasn’t, you’d just have done it now rather than promised it later.
Trust me, “Yes, later” doesn’t work with little kids either.
Christian Selig on his Apollo debacle:
I'd rather the app just die if it would go to a company that would turn something I worked really hard on into something that would ruin its legacy.
Per my post about knowing when to quit, I respect that disposition.
I'm really sorry to those designers who didn't get to see their work launched in the app (to be clear, don't worry, I paid them all – there isn't some bs "exposure" agreement
Christian was a model for how to do icon design in an app and pay creators for it.
Lots of project management software will “take the true texture of a project” and flatten it “into a false linear representation of progress”.
a number can't represent the position of a project, or a piece of work. There's easier work, there's harder work, there's known work, there's unknown work.
What does 62% done mean when the "remaining" 38% of the work is twice as hard as the initial 62%?
If you do the easy stuff first, and leave the hard, or unknown stuff to the end, 62% done isn't just misleading, it's malpractice.
We all love a good number we can use in our presentations, our reports, our promises to others, no matter how poorly it represents reality.
Baldur explaining why he calls the text generated by LLMs the peak of the mundane:
algogen text is all about generating text as it’s normally used—it’s middling utilitarian as an explicit ideal—that makes it the opposite of poetry. Where poetry de-familiarises, algogen is perfect familiarity. Where poetry casts the unexceptional as exceptional, draws out the unusual beauty in the usual, and reuses language in original ways, algogen aspires to peak mundane. Even when it’s ostensibly creative, it only does so through postmodern mash-ups: mixing up tropes, conventions, and references. It is perfectly mediocre.
Which goes a long way towards explaining why people in tech love it.
Ooooohhh burn.
But seriously, maybe we have lowered the bar of what we expect from humans by declaring LLMs to be on par (or even better) than humans:
I’ve lost count of how many people in tech (and marketing, natch) who say that algogen text is just as good as that written by people. They genuinely don’t see the limited vocabulary, word repetition, incoherence, and simplistic use of sentence structure. They only aspire to perfect, non-threatening mediocrity and algogen text delivers that. They don’t care the role writing has in forming your own thoughts and creativity. They don’t care about how writing improves memory and recall. They don’t value the role of creativity in the text itself.
A great interview. Anita thoughtfully articulates the contrast between doing the work and managing the work and the need for experts in both.
both managers and ICs need to unblock projects, ruthlessly prioritize, rally groups of people, make hard decisions, and grow the people around them…
As a manager, you need all those skills to enable the people and organizations in your company. As an IC, you need all those skills to choose the right problems and then the right solutions. A company needs both roles, so they have healthy people and orgs that tackle the right problems intelligently.
Diversity is good:
why should [the] upper echelon of our company be the people who managed the best? What about the people who executed the best?
it’s better if [the people in the upper echelon of the company] look really different from each other! You want one super strategist, and one super negotiator, and one executor, or somebody who actually does the design work at an extremely high level. I think there’s room for all those people.
Also ❤️ this:
[give] people credit for failing when they took an intelligent risk
Whenever you try to force the real world to do something that can be counted, unintended consequences abound.
As noted in the article, the world begins to respond and adapt to your measuring once you begin to measure it.
As Tim Harford writes, data “may be a pretty decent proxy for something that really matters,” but there’s a critical gap between even the best proxies and the real thing—between what we’re able to measure and what we actually care about.
if more data isn’t always the answer, maybe we need instead to reassess our relationship with predictions—to accept that there are inevitable limits on what numbers can offer, and to stop expecting mathematical models on their own to carry us through times of uncertainty.
statistics can be used to illuminate the world with clarity and precision. They can help remedy our human fallibilities. What’s easy to forget is that statistics can amplify these fallibilities, too.
Worth remembering in our time of statistical models around language.
Tyler on view transitions coming onto the web design scene:
As with all design elements, it will take restraint to design compelling transitions that aren’t annoying. I’m certain we’ll go too far and have to reel it back in. So it goes.
That's not to say that we never run these numbers, but it is to say that these numbers never run us.
It reminds me of that line from…I think it was _Fight Club_…something like, “Do you own your stuff, or does your stuff own you?”
all these metrics are downstream from simply making something people want to buy. And keeping your costs below the price you can sell it for. That's the hard part, even if it's a simple calculation.
One of my fundamental rules of system design is when people keep doing it wrong, the people are right and your system or idea is wrong. A corollary to this is that when you notice this happening, a productive reaction is to start asking questions about why people do it the 'wrong' way.
I think I will always prefer lightweight, non-typed scripting languages. I also over-index on code portability and the hypothetical need to migrate away from a technology, that’s on me as well.
100%. I love to over-index on the hypothetical theoretical—and I know, that’s on me.
I’ve built a lot of code modification pipelines over the years and you know what always breaks down? The code modification pipelines.
Extracts from a talk by Neil Postman at the National Convention for the Teachers of English in 1969. I hope to one day write something, anything, that is as relevant in fifty years as this is today.
First, he notes the importance of identifying bullshit:
As I see it, the best things schools can do for kids is to help them learn how to distinguish useful talk from bullshit. I will ask only that you agree that every day in almost every way people are exposed to more bullshit than it is healthy for them to endure, and that if we can help them to recognize this fact, they might turn away from it and toward language that might do them some earthly good.
There are so many varieties of bullshit I couldn't hope to mention but a few, and elaborate on even fewer. I will, therefore, select those varieties that have some transcendent significance.
Now, that last sentence is a perfectly good example of bullshit, since I have no idea what the words "transcendent significance" might mean and neither do you. I needed something to end that sentence with and since I did not have any clear criteria by which to select my examples, I figured this was the place for some big-time words.
Then he provides a great rundown of the flavors of bullshit we deal with every day.
Pomposity:
pomposity in that they are made to feel less worthy than they have a right to feel by people who use fancy titles, words, phrases, and sentences to obscure their own insufficiencies.
Fanaticism:
The essence of fanaticism is that it has almost no tolerance for any data that do not confirm its own point of view.
Inanity:
with the development of the mass media, inanity has suddenly emerged as a major form of language in public matters. The invention of new and various kinds of communication has given a voice and an audience to many people whose opinions would otherwise not be solicited, and who, in fact, have little else but verbal excrement to contribute to public issues
And superstition:
Superstition is ignorance presented in the cloak of authority. A superstition is a belief, usually expressed in authoritative terms for which there is no factual or scientific basis. Like, for instance, that the country in which you live is a finer place, all things considered, than other countries. Or that the religion into which you were born confers upon you some special standing with the cosmos that is denied other people.
It all has to do with values:
bullshit is what you call language that treats people in ways you do not approve of.
one man's bullshit is another man's catechism. Students should be taught to learn how to recognize bullshit, including their own.
So what’s one to do, against even your own bullshit?
It seems to me one needs, first and foremost, to have a keen sense of the ridiculous. Maybe I mean to say, a sense of our impending death. About the only advantage that comes from our knowledge of the inevitability of death is that we know that whatever is happening is going to go away. Most of us try to put this thought out of our minds, but I am saying that it ought to be kept firmly there, so that we can fully appreciate how ridiculous most of our enthusiasms and even depressions are.
The most concerning part about large language models is that they are thrusting more bullshit into the world because they’re built and trained on human language. Garbage in, garbage out; bullshit in, bullshit out.
Sensitivity to the phony uses of language requires, to some extent, knowledge of how to ask questions, how to validate answers, and certainly, how to assess meanings.
No one has ever complained that a website is too fast.
Reminds me of Bezos’ quote about how customers will never say, “I wish you’d deliver packages a little slower”.
on a platform like YouTube, the users are split into probably two major camps. There's the camp of people who just love the creation process and they make videos to express their creativity and they don't care about the platform itself. Those are the people who'd make videos even if YouTube wasn't a thing. And then there's the camp of people who see YouTube as a career or a business opportunity. They're in the game to make money and if a path to that is to just copy other people's content and thumbnails so be it.
Manuel puts his finger on something I’ve long felt around how I curate my RSS feed. I would estimate this is the makeup of my feed:
- 70% are individuals who blog to blog — no underlying monetization to their personal feed, they do it because they like to (e.g. Manuel himself).
- 20% are individuals whose blog constitutes some part of their business (e.g. Josh Comeau’s personal site, or Harry Roberts at CSS Wizardry)
- 10% are companies who exist as a business and have a blog (e.g. Netlify’s blog)
I like to stay abreast of what’s hot, but the real interesting, varied stuff comes from individuals who will “not stop doing what they do because they simply enjoy the process of creation”. Their content is more than just variations on a press release of whatever’s mainstream in tech at the moment.
Overkill is using five different products to run a single project. Overkill is a seven-stage interview process that exhausts everyone involved. Overkill is acting like a company 100x your size.
Overkill is building and maintaining a website whose composition and complexity is meant for a scale of billions and all you have is hundreds.
Amass what you need, but ignore even more.
Scrollbars are part of the area of the browser that is outside your scope of concern.
Similar to “you break it, you buy it”, I like this theme Eric proposes for working with the browser: “you modify it, you’re responsible for it.”
If you modify the Operating System scrollbar’s appearance, it is now on you to ensure that it meets Web Content Accessibility Guidelines criteria.
If you override a default behavior, it is now on you to provide everything the browser would have done (e.g. if you prevent a form’s default submission, handling that submission is now your job). That kind of awesome responsibility might make us stop and question our choices a little more often.
When you override this expression, you’re indirectly communicating that someone’s personal preferences are less important than your own visual sensibilities.
Blogging:
Imagine that me being bad at self control, and maybe bad at life generally, but trying to write about it honestly, could be the nudge someone needed to emphasize their own priorities in life.
Writing is thinking:
If you had asked me when I was in school what the purpose of writing is, I would have said something like putting what you think into words. What I understand now is that the very act of writing can change your thinking. Writing is not mere transcription. It can be a way to think more clearly and to produce thoughts
Also this:
When a topic or question comes up that I have written about, I love being able to just provide a link to my post instead of trying to recreate my past thoughts in a way that will inevitably be less coherent than my post.
I don’t always feel like a successful, responsible adult. But when I do, it’s because someone asks what I think on a topic and I can respond with a link saying, “I wrote it down.”
At most companies, people put together a deck, reserve a room (physical or virtual), and call a meeting to pitch a new idea. If they're lucky, no one interrupts them while they're presenting. When it's over, people react. This is precisely the problem.
The person making the pitch has presumably put a lot of time, thought, and energy into gathering their thoughts and presenting them clearly to an audience. But the rest of the people in the room are asked to react. Not absorb, not think it over, not consider — just react. Knee-jerk it. That's no way to treat fragile, new ideas.
Well, when you put it that way, that does sound pretty crazy.
Me personally, I’m all for distilling your thinking through writing and allowing people the time and space for consideration — though all this “AI writes for me” business sometimes seems the antithesis of that.
At Mozilla and Microsoft we kept running into the same issue: web products with tons of senseless HTML, mostly unused CSS and an absolute avalanche of JavaScript sent to the end users with no benefit to them. All the benefits were for the convenience of the developers and the flexibility to build whatever with a framework that promised optimised output.
Web languages are often seen as compile targets.
The JavaScript ecosystem’s reliance on a single centralized module registry conflicts with the web's decentralized nature. With the introduction of ES modules, there is now a standard for loading remote modules, and that standard looks nothing like how module loading through NPM works. Deno implements loading ES modules through HTTP URLs - allowing anyone to host code on any domain simply by running a file server.
This approach offers great benefits. Single file programs can access robust tooling without needing a dependency manifest
Deno is webby.
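For illustration, a URL import in Deno might look like the sketch below (the module URL is made up; running this requires the Deno runtime and network access to a real file server hosting an ES module):

```typescript
// main.ts — Deno resolves the dependency over HTTP at load time;
// no package.json, no registry, no install step. The URL is illustrative:
// any server that can serve a TypeScript/JavaScript ES module will do.
import { greet } from "https://example.com/modules/greet.ts";

console.log(greet("Deno"));
```

Run with `deno run main.ts`; Deno fetches and caches the module on first use.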
“This idea of [AI] surpassing human ability is silly because it’s made of human abilities.” [Lanier] says comparing ourselves with AI is the equivalent of comparing ourselves with a car. “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”
Humans are special, we too easily forget that. Lanier:
A lot of modern enlightenment thinkers and technical people feel that there is something old-fashioned about believing that people are special – for instance that consciousness is a thing. They tend to think there is an equivalence between what a computer could be and what a human brain could be. We have to say consciousness is a real thing and there is a mystical interiority to people that’s different from other stuff because if we don’t say people are special, how can we make a society or make technologies that serve people?
Also, this part reminded me of what Chris Coyier wrote about how Google, once the bastion of pointing people to individual websites, now seems keen on using AI to simply suck up the knowledge on those websites and spit out a word smoothie rather than send people to them.
There are two ways this could go. One is that we pretend the bot is a real thing, a real entity like a person, then in order to keep that fantasy going we’re careful to forget whatever source texts were used to have the bot function. Journalism would be harmed by that. The other way is you do keep track of where the sources came from. And in that case a very different world could unfold where if a bot relied on your reporting, you get payment for it, and there is a shared sense of responsibility and liability where everything works better.
Lanier’s warning on the ultimate danger of AI:
To me the danger is that we’ll use our technology to become mutually unintelligible
(Found via Eric Bailey’s excellent links newsletter.)
customers hate MVPs. Startups are encouraged by the great Reid Hoffman to “launch early enough that you’re embarrassed by your v1.0 release.” But no customer wants to use an unfinished product that the creators are embarrassed by. Customers want great products they can use now.
“MVPs are great” said only the peddler of the product, never its customer.
It might be great for the product team, but it’s bad for customers. And ultimately, what’s bad for customers is bad for the company.
Also this, 10000%:
The customer should have a genuine desire to use the product, as-is. Not because it’s version 0.1 of something complex, but because it’s version 1.0 of something simple.
Products that do less but are loved, are more successful than products which have more features, but that people dislike.
…There are many ways to generate love. “Minimum” and “viable” are definitely not two of those ways.
In contrast, there’s a SLC product:
A SLC product does not require ongoing development in order to add value. It’s possible that v1 should evolve for years into a v4, but you also have the option of not investing further in the product, yet it still adds value. An MVP that never gets additional investment is just a bad product. An SLC that never gets additional investment is a good, if modest product.
Every book I read has a hundred summaries on the internet, each more detailed and comprehensive than mine, but I still take book notes because I want to remember what impacted me. Even if an AI knew what those things were, delegating that work would defeat the purpose.
Precisely!
I like to think of CSS as a conditional design language. Over the years, CSS was known as a way to style web pages. Now, however, CSS has evolved a lot to the point you can see conditional rules. The interesting bit is that those CSS rules aren’t direct (i.e., there is still no if/else in CSS), but the way features in CSS work is conditional.
I never really thought of this, but I like it: there’s no if/else in CSS, but it’s very much conditional. The conditionality of CSS is part of what makes it so much more powerful than today’s design tools like Figma — this is a great image that illustrates that gap.
Also: did you know media queries are actually called “CSS Conditional Rules Module” in the spec? I didn’t.
To me, CSS is like a superpower because it allows me to make so many design decisions through its conditional features. Working with design tools can sometimes feel limiting because I feel like I’m constrained within certain walls. I think that the ability to create conditional rules with CSS is what sets it apart and makes it powerful for web design.
A great talk from Maciej Ceglowski in 2014.
Our job is to get [the people] connected, maybe put a decent font on the knowledge, and then watch the fireworks happen.
What happens next is wonderful and unexpected. It’s a humble vision of technology. We know how to make the tools but we don't believe that we're smarter than seven billion people in how we choose to use them. We say, “Surprise us.” And then we iterate back when we see something cool they're doing. Then maybe we try to design different tools to make that happen more easily.
A lot of the interesting things that happen in technology are not things we planned or anticipated, yet we still like to think this next time around we’ll design things from the ground up.
When Twitter came out, they didn’t even anticipate things like the hashtag...again and again we have this history of not really knowing how our technologies are going to be used but it doesn't stop us from thinking we can design the next ones from the ground up.
When it comes to choosing what to do, it's always binary for me.
Yes or no.
Now or not now.
Do or don’t.
What about maybe? Maybe is no (for now).
What about some grey area between now or later? Anything other than now rounds down to later.
Being binary about what you choose to do brings clarity to what needs to be done.
“Maybe means no (for now)” is very relevant to me.
Jacobo Prisco has a piece in WIRED noting how some aircraft, including 747s, still rely on floppy disks for their software updates.
It is possible to upgrade from floppy disks to USB sticks, SD cards, or even wireless transfer, but doing so could cost thousands of dollars—and mean making a change to something that, while archaic, is known to work.
Clive Thompson, in his newsletter, leaves this commentary:
“While archaic, is known to work.” That final clause highlights why these floppy-disk-equipped 747s are a terrific object lesson in the challenges of embracing “cool new tech” in domains where error is super freaking bad. Pilots love flying older jets because they are well-known quantities; any bug or quirk was found out years ago. Do you really wanna gamble that the new data-transfer system for your 747 replicates data precisely as the old floppies did? I mean, it probably will. But, y’know, a 747 weighs 455 tons at takeoff; marginal errors matter.
Not a place for JavaScript, I think.
How have I never heard of the term “shiterate”?!
The constant tension of shipping faster versus shipping better. Falling into a cycle of "Ship, then iterate" is a trap. It ends up being more shiterate. Things happen and that "fast-follow" V1.1 release or V2.0 you had imagined probably won't. There's always a new shiny initiative, re-org or new leadership hire that throws a wrench into things and changes all plans. Don't rely on a future release to clean up today's mess.
“Quality” is so much more than pleasing aesthetics and abundant animations.
Quality is all-encompassing.
It’s accessibility so that everyone can use it.
It’s performance and ease of use so it respects the user’s time and helps them accomplish their tasks when they need.
It’s reliability so they can feel good about having this tool in their pocket for when they need it.
It's durability with designs and components that can scale to withstand future needs and uses.
It’s well-considered.
It’s the feeling that a lot of time and care went into creating the product: that someone has already thought of you.
Of course, it can also be a bit extra and bring delight in unexpected places and important moments.
The author asks a great question: if we all agree quality is a good thing, and nice to have, why isn’t there more quality in the world?
Liberal use of "MVP" or "it's just an experiment" [is often a way] to skirt around typical quality standards and ship something subpar?
…Ditch the term MVP and use SLC (Simple, Lovable, Complete).
I like that there’s no silver bullet peddled in the article. Quality is hard. And it’s a way of working more than it is a measurable quantity.
teams should be working in a way where everything is considered and there's a framework for identifying, discussing and prioritizing quality-related issues so that quality is a bit less of a Sisyphean task.
This resonates with my experience:
What doesn't work in my experience is marking tickets you can't get to before your deadline as design debt, tech debt or polish to tackle later. Polish is an easy word to equate to design tickets but it's an especially poor choice as most people tend to think it's defined as optional, obsessive and unnecessary details.
Emphasis mine:
In practice, sign-off leads to disappointment for everyone involved. A design created in isolation in a graphics-design tool almost never survives contact with the reality of the web. The client is disappointed that the final output doesn’t match what was signed off. The developer is disappointed that they weren’t consulted sooner. The designer is disappointed that the code doesn’t match the design.
I really like this acknowledgement that, in practice, sign-offs function as tacit promises that are too easily broken (UI mocks through “hand off” often follow this same pattern).
Making a comp for sign-off is like making a promise. When the finished product doesn’t match the comp, it’s like a promise has been broken.
By combining design and development, there are no promises to be broken. When you show something to the client, it’s already in the browser. It’s already in the final medium. Instead of saying, “we promise to make this,” you’re saying “here’s what we’re making.”
[Paul Ford] as an advisor…people come to me and ask[:] How do I reconstruct the world so that it appreciates me?
[Rich Ziade] Your money’s not in a safe deposit box at the bank.
[Paul Ford] No, it’s in a database.
[Rich Ziade] It’s a row in a database, right?
[Paul Ford] Yes, so much of our lives are rows in databases. It actually probably should scare us all day long…Because have you ever seen what engineers do? Have you ever met a [database engineer]? Yeah, it’s not good.
our typical methods for measuring intelligence—IQ tests and various university-style examinations—rarely if ever consider someone’s ability to, say, effectively deescalate a violent encounter, or interpret body language within and across cultures, or sit meditatively without looking at one’s phone every ten seconds. Those skills are positioned, at best, as supplementary to actual intelligence
if you scratch the surface of any notion of intelligence, you run headlong into a belief system that renders some people more intelligent—and therefore more valuable, more worthy of attention or care—than others.
When you’re advertising or talking about a company, naming competitors is as much about choosing as it is about observing. If you want to make your product sound valuable, you compare it to a competitor with an expensive product, not a free one.
The reality is that most people don’t have the time or need to understand the differences between different companies and products. If the first thing they try works, then great - even if it isn’t the perfect, most efficient way to do it. This is, after all, what advertising is: a way of telling people that some product is the thing that they need.
The problem is that it’s real hard to argue against shitty design and product decisions. If junk data rules your organization then it’s almost useless fighting; when you see your customers as links in a spreadsheet or tiny dots in a graph then every terrible design decision under the sun can be justified. Heck, in most cases junk design isn’t permitted but preferred.
(If the numbers are the most important thing, then your website will suffer the consequences.)
Robin on point!
The products I adore the most are the ones I want to return to because they respect me as a person,
I see AI as a prison. Slamming AI into products like everyone’s doing right now is mostly an excuse not to think critically about hard problems…
AI is a prison because it traps us, it tricks us. It’s far too easy to forget that what’s happening under the hood is a bunch of similar words being slapped into each other over and over again and then hoping for the best. It’s a charade of intelligence that we mistake for actual intelligence. But alas, Artificial Intelligence sounds much more impressive than Artificial Guessing in a slide deck.
Slapping stuff together and hoping for the best, tell me that doesn’t describe us humans quite succinctly — the created modeling its creator.
As you can see, the sharing options depend on what the user has installed on their device. This is great because as the developer, you don’t have to care about what social networks they use—the list of options is always relevant. And if they don’t use social networks, no big deal—there are still options for things like texting, email, and copy to clipboard.
No ugly buttons. No tracking. No free advertising for social media giants. Are there any downsides to using the Web Share API?
Good example of not trying to assume everything yourself. Defer to your users’ preferences!
(Reminds me of Jeremy’s work around a native button[type=share].)
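The deferral the article describes is easy to sketch in code. This is a minimal, hypothetical helper (not from the linked article) that decides between the native share sheet and a clipboard fallback; the strategy names are my own:

```javascript
// A sketch of progressive enhancement around the Web Share API.
// pickShareStrategy is a hypothetical helper: given a navigator-like
// object, it decides whether to use the native share sheet or fall
// back to copying the URL.
function pickShareStrategy(nav) {
  if (nav && typeof nav.share === "function") {
    return "native-share"; // the user's installed apps decide the options
  }
  if (nav && nav.clipboard && typeof nav.clipboard.writeText === "function") {
    return "copy-link"; // no share sheet: offer "copy to clipboard" instead
  }
  return "plain-link"; // last resort: just show the URL
}

// In a real page you'd wire this to a button's click handler, e.g.:
// if (pickShareStrategy(navigator) === "native-share") {
//   navigator.share({ title: document.title, url: location.href });
// }
```

The nice part is that the page never has to know which social networks (if any) the user actually uses; the browser and OS answer that question.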
Paul Ford:
[By design] the architecture of the giant organization is built to insulate itself against an enormous amount of failure.
And it’s built to create irrelevance in human beings.
by chasing trends we would never be the ones to set them.
If you do what everyone else is doing, how will you do what no one else has done?
What’s really concerning is when everyone is consumed with the technology-first and the problem-last.
I’m certain now that if you want to build something great you have to see through the tech.
I can tell you right now: I don’t follow a blog because of its design or tech stack.
if you want to build anything substantial, if you truly want to build a great product, then the technology has to come last.
Few of my tweets maneuvered past 100,000 “impressions”, but this one most definitely did…last I checked, [it] got the attention of over a quarter-million individuals and/or machines
Lol I appreciate the author’s honesty here in noting the virality of their own tweet. Could be majority bots, who knows?
Be wary of the opaque signals we use for social validation.
most product orgs suck and churn out garbage projects because they waste so much time thinking in terms of junk data and half baked user inputs to inform their decisions.
(Show me what your org measures and I’ll show ya the crappy product that comes out the other side.)
Lol also this:
NPS scores—the NFTs of product management—
When it comes to measuring:
everyone in the field believing that they’ve built a science when they’ve really built a cult.
Honestly hard not to copy paste this whole article:
This numerical value sure is bullshit but it’s not even helpful bullshit because these numbers never explain why things suck.
(Just look at the product and it will tell you why it sucks.)
Boom! This phrase will stick in my head for years I think:
the only way to build a great product is to use it every day, to stare at it, to hold it in your hands to feel its lumps. The data and customers will lie to you but the product never will.
I feel a lot like Robin:
I don’t care what the data shows me and I’m not sure I ever will. You can show me charts and spreadsheets all day long and I will not care. Tell me what your gut says after relentless experience of the product every day. This is the only way to see the world clearly.
I just love this piece so much
You can only build a great product if you care more for the vibes than for the data.
The act of spending that time in those [RSS] feeds still feels like a very deliberate, intentional act. Curating a set of feeds I find interesting and making the time to read them feels like an investment in myself.
I think there’s hidden value in the act of curating your own feed — plant new stuff and see if it takes root, pull out weeds, etc. — rather than offloading that work to some algorithm.
I really miss UI design that made controls obvious. Clear affordances. All buttons obviously buttons, all text input fields obviously text input fields...20 years ago we bent over backwards to make the purpose of every control as obvious as possible; the style today is to make everything look like flat static text.
Some on-point observations of where we are today in UI design.
Depth and texture in UI are good things. Our displays have never been better suited to displaying color and fine detail, but our UI themes today look like they were designed for output on a crummy old laser printer or something.
Some interesting thoughts on SSG vs SSR, especially given there could be a pendulum swing away from build steps coming:
This site is dynamic; it’s hosted on fly.io which makes it super easy to run a server. Requests are handled by a server which query the graph and dynamically render the content. Cloudflare caches the requests.
Why not use a static site generator? The mental model of a static site generator has always felt more complex to me. I just want to handle a request and do stuff. The nice thing is the complexity can grow to my needs; when I’ve used static site generators before, they always start simple but it always turns into a headache to do anything more than simple markdown routes.
SSG also impedes the frictionless publishing workflow. I can write this content and press Cmd+Shift+e and see it immediately on my site. That’s amazing.
There’s no compile step to wait for. I just upload some new content and it’s immediately available.
SSG favors run-time simplicity over build-time complexity. That’s a tradeoff many are willing to make, but I find having to commit my content and push to github just to publish content way too much friction.
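The “just handle a request and do stuff” model the author prefers can be sketched in a few lines. This is my own toy illustration, not the author’s actual fly.io code; the in-memory store and render logic are assumptions:

```javascript
// A toy sketch of the dynamic model: publishing is just a write to a
// content store, and every request renders the latest content. No
// build step, no commit-and-push to publish.
const posts = new Map(); // slug -> { title, body }

function publish(slug, title, body) {
  posts.set(slug, { title, body }); // live on the very next request
}

function handleRequest(path) {
  const slug = path.replace(/^\/posts\//, "");
  const post = posts.get(slug);
  if (!post) return { status: 404, html: "<h1>Not found</h1>" };
  // Rendered per-request, so edits show up on the next refresh.
  return { status: 200, html: `<h1>${post.title}</h1><p>${post.body}</p>` };
}
```

A static generator would instead run that render once per page at build time and write files out; a CDN cache in front of the dynamic version (as with the author’s Cloudflare setup) recovers most of the run-time cost while keeping publishing instant.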
The economic basis of the internet is surveillance.
It’s an interesting phenomenon that people don’t feel quite the same about commercial surveillance as they do government surveillance.
- Gov is spying on us? [Pitchforks] Ah, hell no!
- Private company is spying on us? Meh.
If the gov made you carry a device with incredibly sensitive personal details like geolocation metadata, there could be protests in the street. But if it’s from a commercial entity we’ll do it voluntarily.
Google [is] the world’s de facto internet server. It occupies a dominant position in almost every area of online life. It’s unremarkable for a user today to connect to the internet on a Google phone using Google hardware talking to Google servers on a Google Browser while blocking ads served by Google AdSense on sites where trackers use Google Analytics and DoubleClick and whatever else
Sarcasm can be the cheap way out:
Sarcasm “works” because it alludes to a critique without ever actually making it. It shifts the burden of substantiating the criticism as an exercise for the audience and further suggests that if they don’t already understand it then they are deficient. Making a critique implicit is an unassailable rhetorical position. The most socially acceptable response for the group is to go along with it, as you have given them nothing specific to challenge. And if someone does challenge it you can simply demur and say it was “just a joke.”
Create some value:
If you want to make a critique then do it explicitly and earnestly. Take a position of your own and defend it. It takes a lot more work but in exchange it holds the promise to create a great deal more value for society.
Also a good tidbit for front-end folks:
On any topic of substance there are bound to be valid critiques of any given position. Real questions are almost never settled in terms of right or wrong but rather how best to balance the competing equities of various solutions. 
In an article about snake case and why it’s “the fairest of them all”, Pedro starts practically:
If you're working with code that already has a case style, just use that. An imperfect convention is better than two competing ones.
Acronyms are a particular pain case I encounter a lot.
With other cases, you need to decide how acronyms should be capitalized: fetchRssFeedAsXml or fetchRSSFeedAsXML? Should you always CAPS LOCK acronyms? Or never do so?
XMLHttpRequest is a classic example of mixed casing baked right into the web platform: capitals are used both syntactically (breaks in words) and semantically (abbreviations).
This shook my world. I guess I’m team snake case now.
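The acronym headache shows up concretely if you ever try to split a camelCase identifier mechanically. A toy converter (my own sketch, not from Pedro’s article) makes the point: both camelCase spellings have to be handled specially, while the snake_case result is unambiguous at every underscore:

```javascript
// A toy sketch of why snake_case sidesteps the acronym question:
// splitting camelCase is ambiguous ("Rss" vs "RSS"), so the converter
// needs two separate rules; snake_case needs none.
function toSnakeCase(identifier) {
  return identifier
    // break between a lowercase letter/digit and a capital: fooBar -> foo_Bar
    .replace(/([a-z0-9])([A-Z])/g, "$1_$2")
    // break after a run of capitals before a normal word: RSSFeed -> RSS_Feed
    .replace(/([A-Z]+)([A-Z][a-z])/g, "$1_$2")
    .toLowerCase();
}
```

Both competing camelCase spellings collapse to the same snake_case name (fetch_rss_feed_as_xml), which is exactly the “one convention instead of two” argument.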
This is enshittification: surpluses are first directed to users; then, once they're locked in, surpluses go to suppliers; then once they're locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit. From mobile app stores to Steam, from Facebook to Twitter, this is the enshittification lifecycle.
As an example, here’s what Tiktok has been up to:
For You is only sometimes composed of videos that Tiktok thinks will add value to your experience – the rest of the time, it's full of videos that Tiktok has inserted in order to make creators think that Tiktok is a great place to reach an audience.
[citing a Forbes piece of reporting] "Sources told Forbes that TikTok has often used heating to court influencers and brands, enticing them into partnerships by inflating their videos’ view count. This suggests that heating has potentially benefitted some influencers and brands — those with whom TikTok has sought business relationships — at the expense of others with whom it has not."
…For Tiktok…"heating" the videos posted by skeptical performers and media companies is a way to convert them to true believers, getting them to push all their chips into the middle of the table, abandoning their efforts to build audiences on other platforms (it helps that Tiktok's format is distinctive, making it hard to repurpose videos for Tiktok to circulate on rival platforms).
Ah yes, the old proprietary format trick. Will we never learn?
[as a publisher of content, platforms] can make more money by enshittifying their feeds and charging you ransom for the privilege to be included in them.
This is really the subtle shift here taking place: products that give you what they want you to see, instead of what you asked for. (This is why RSS feeds are amazing: nobody can get into your feed or be prioritized in it unless you say so.)
When it comes to “recommendations”, navigating the incentives between “what’s best for you” (the user) and “what’s best for us” (the provider) will be an evergreen problem.
Google has 175,000+ capable and well-compensated employees who get very little done quarter over quarter, year over year. Like mice, they are trapped in a maze of approvals, launch processes, legal reviews, performance reviews, exec reviews, documents, meetings, bug reports, triage, OKRs, H1 plans followed by H2 plans, all-hands summits, and inevitable reorgs.
If there’s one thing that drives me nuts, it’s spending time doing work that’s not the work. It’s cray-cray to me that orgs spend weeks on these review cycles and it’s just accepted, like “It’s performance review season, everything’s on hold.”
very few Googlers come into work thinking they serve a customer or user. They usually serve some process (“I’m responsible for reviewing privacy design”) or some technology (“I keep the CI/CD system working”). They serve their manager or their VP. They serve other employees. They will even serve some general Google technical or religious beliefs (“I am a code readability expert”, “I maintain the SWE ladder description document”). This is a closed world where almost everyone is working only for other Googlers, and the feedback loop is based on what your colleagues and managers think of your work.
AI’s main moves are to segregate and divide, to make predictions of the future based on an extension of the past—that is, to preserve, and to increase, a status quo of inequality.
there is a seemingly inexhaustible supply of individuals who have both large platforms and ill-informed opinions.
What’s the difference between lying and bullshitting?
A liar knows what they are saying is false. A bullshitter doesn’t care whether it is true or false. The liar has not abandoned all understanding of truth, but they are deliberately trying to manipulate people into thinking things are otherwise than they actually are, whereas the bullshitter has simply stopped checking whether the statements they are making have any resemblance to reality.
Rather than look inside ourselves for the confidence, we can be so desperate to look to others to tell us we’re doing The Right Thing™️
many of us are far too easily swayed by confident people who pose as experts, especially on subjects where we don’t have the knowledge ourselves to evaluate the claims being made. I suspect that the careers of [bullshitters] have been made possible in large part by [their] astonishing levels of confidence in themselves.
This is so true too:
Intelligence is not actually, of course, a single quality, and plenty of people who know how to do one thing well (such as trade cryptocurrencies or develop real estate) know precious little else. In fact, if someone has devoted their entire life to the pathological pursuit of riches, they are likely to be very ignorant of a lot of the world’s knowledge, because much of it simply won’t have been relevant to their area of interest.
The sad reality is: bullshitting is rewarded.
the problem is far deeper than the algorithms of Twitter and Facebook feeds. We also have a culture in which arrogance is rewarded rather than kept in check, and people can see that with enough shameless bluster you might become the richest person in the world or the president of the United States.
What can we do?
We are trying to create a culture of thoughtfulness and insight, where people check carefully to see whether what they’re saying is true, and excessively egotistical people are looked upon with deep suspicion. With time and patient effort, perhaps we can create a world in which the people who rise to the highest offices and reap the greatest rewards are not also the ones who are most full of shit.
One of the best talks on creativity I’ve heard. Definitely worth the ~30 minutes.
We like to do easy things.
It’s easier to do trivial things that are urgent than it is to do important things that are not urgent (like thinking). And it’s also easier to do little things we know we can do, than to start on big things we’re not so certain about.
And creativity is hard. There’s discomfort and anxiety:
If we have a problem, and we need to solve it, until we do we feel inside us a kind of internal agitation, a tension, an uncertainty that makes us just plain uncomfortable, and we want to get rid of that discomfort. In order to do so, we take a decision. Not because we’re sure it's the best decision but because taking it will make us feel better.
Well, the most creative people have learned to tolerate that discomfort for much longer and so just because they put in more pondering time their solutions are more creative.
The trick is to live in that discomfort:
[Donald MacKinnon] discovered that the most creative professionals always played with the problem for much longer before they tried to resolve it. They were prepared to tolerate that slight discomfort and anxiety that we all experience when we haven’t solved a problem.
John goes on to clarify:
Please note, I'm not arguing against real decisiveness. I’m 100% in favor of taking a decision when it has to be taken, and then sticking to it while it’s being implemented. What I’m suggesting to you is that before you take a decision you should always ask yourself the question, “When does this decision have to be taken?” And, having answered that, you defer the decision until then in order to give yourself maximum pondering time which will lead you to the most creative solution.
And if while you’re pondering somebody accuses you of indecision, say, “Look baby cakes, I don’t have to decide until Tuesday and I am not chickening out of my creative discomfort by taking a snap decision before then. That’s too easy.”
Love it.
the most essential aspects of writing: grappling with uncertainty, confusion and insecurity, and allowing time and space to consolidate ideas and form new associations.
As appealing as the prospect of an automated writing assistant may seem, far better would be a writing coach that resists the impulse to spoon-feed us ‘the answer’. A coach prompts us to pause, reflect, reconsider, revise. Lex is a far cry from the ‘thought partner’ it claims to be. It more closely resembles Gmail’s Smart Compose: someone always ready to interject with sensible, middle-of-the-road suggestions, foreclosing possibilities before we have even had a chance to consider them.
We create AI — and automation — to do hard things for us instead of help us do hard things.
If we want to make sense of what we are writing, and why, we can dispense with pseudo-oracles imposing their ideas on us. We have to think for ourselves which, by extension, means writing for ourselves.
Writing is refined thinking.
the term [AI] is so vague as to be meaningless. Sometimes—though rarely—AI refers to general artificial intelligence. Sometimes AI refers to machine learning. Sometimes AI refers to large language models. Sometimes AI refers to a series of if/else statements. That’s quite a spectrum of meaning.
This is exactly what app stores were advertised to prevent. The web was a wild, untamed and terribly unsafe place full of software you can’t trust. App stores, instead, are curated and safe havens of only tested and tried, genuine software. Until someone pays enough to get their app listed with the right keywords.
You either die a hero, or you live long enough to see yourself become a villain.
search engine results have become advertising in disguise, with the first 10 results either being flat out ads or those who spent a lot of money on advertising or shifty SEO tricks to show up first. It’s like a run-down shopping mall, with no local products or employees and chain stores selling knock-off products rather than the high quality ones.
Wow, that kind of nails it right on the head. I think very specific technical queries still give decent results but anything consumer, like “best laptop for college kids”, is absolutely worthless.
Alex Russell:
Once the lemon sellers embed the data-light idea that improved "Developer Experience" ("DX") leads to better user outcomes, improving "DX" became an end unto itself, and many who knew better felt forced to play along. The long lead times in falsifying trickle-down UX were a feature, not a bug; they don't need you to succeed, only to keep buying.
“DX” as “trickle-down UX” lol, I haven’t heard that one.
The key thing about the tools that work more often than not is that they start with simple output. The difficulty in managing what you've explicitly added based on need, vs. what you've been bequeathed by an inscrutable Rube Goldberg-esque framework, is an order of magnitude in difference. Teams that adopt tools with simpler default output start with simpler problems that tend to have better-understood solutions
If you’re well versed in HTML, CSS, and vanilla JavaScript, but you’re not up to speed on pipelines and frameworks, you’re going to have a hard time.
That doesn’t seem right. We should change it.
This seems right. I feel like there’s a metaphor here I’m reaching for but can’t quite articulate…
Surely other professions feel this pain where the growing complexity of the field pinches out a certain kind of practitioner and the field becomes poorer without those folks?
Paul Ford on leadership:
I used to feel that to demonstrate leadership I had to know everything. I had to show that I knew everything top to bottom.
[what is actually the case is] you don’t know everything. You’re just the guy in charge. So drive forward and let everybody tell you what you’re missing.
Also, Paul on the traditional enmity between disciplines like engineering and marketing:
The most brutal fact of life is that the discipline you love and care for is utterly irrelevant without the other disciplines that you tend to despise.
All programming philosophies can be boiled down to an opinion on how to deal with state. Examples:
Monolith - Modifying state distributed among many services is hard to get correct; keep it centralized.
Service-Oriented-Architecture - Modifying all of the state in one service is hard to get correct; distribute it among multiple services.
Lol. But I do think there’s something here. Contrast makes the trade-offs manifest, and the silver bullet is a myth.
Every programming philosophy is about how to manage state, and each philosophy comes with trade-offs. What this means is that there is no "one true way" to deal with state, and that each programming philosophy is useful and important in the correct domain.
When in doubt, blog it out
A great catchphrase! Blog your problems, because one of the following will happen:
- You’ll solve your problem while writing out your problem (Best)
- Someone responds who knows how to solve your problem (Great)
- No one responds and you learn you have a unique problem (Less great, but novel)
Also, this line casting SEO as analogous to mysticism is everything I love about Dave’s writing:
“Use this tag” or “Don’t do this weird combination of HTML” SEO-tricks probably have an impact, but advice hits me like “Be sure to arrange the energy crystals on your homepage in a certain way.”
SEO optimised content, AKA fucking garbage
Lol, yeah.
I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.
“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”
At first I thought this was some reference to chaos engineering and the chaos monkey, but nope. Turns out this was real monkeys.
There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.
Why does it seem like a radical idea nowadays to say: when I search for something, the first result should be the thing that most closely matches what I searched for.
Here’s something I think we should all agree upon: when a willing speaker wants to say something to a willing listener, our technology should be designed to make a best effort to deliver the speaker’s message to the person who asked to get it.
I hope this is self-evidently true. When you dial a phone number, the phone company’s job is to connect you to that number, not to someone else. When you call Tony’s Pizza, you expect to be connected to Tony’s Pizza — not to Domino’s, not even if Domino’s is willing to pay for the privilege…
If you follow someone on social media, then the things that person says should show up in your timeline.
That is not a radical proposition, but it is also not the case today. Facebook, Twitter, TikTok, YouTube and other dominant social media platforms treat the list of people we follow as suggestions, not commands.
This is why I love RSS: if I subscribe to something, it will show up in my feed. End of story.
As one of Brian Eno’s Oblique Strategies has it: “Be the first person to not do something that no one else has ever thought of not doing before.”
It is insidious the way that truth and falsity are so thoroughly and authoritatively mixed together.
Large language models are not databases. They are glommers-together of bits that don’t always belong together.
Like most people, my head is full of thoughts and stuff I have opinions on. I conduct enthusiastic debates in my head, and internally anyway, I’m always super convincing, right?
But when I sit down to write a blog post about something — that’s when I have to figure out what I really think, and what I really know, about a subject.
To feed my blogging, I am constantly reading books, magazine articles, academic papers, and a sprawling network of blogs…Much as writing catalyzes thinking, reading catalyzes writing; the vast majority of ideas I get for blog posts come from reading
Love that line: "Writing catalyzes thinking, reading catalyzes writing.” Next time somebody asks, “How do you blog so much or know what to write about?” One answer will be: “I read a lot.” Who doesn’t have thoughts after reading?! Write them down next time.
I’ll repeat this until I’m hoarse, but the apps that best support accessibility features for disabled people invariably are also the most usable for everyone.
In truth, there is no secure software supply-chain: we are only as strong as the weakest among us and too often, those weak links in the chain are already broken, left to rot, or given up to those with nefarious purposes.
Whenever I bring up this topic, someone always asks about money. Oh, money, life’s truest satisfaction!
…but at some point, it becomes unreasonable to ask just a handful of people to hold up the integrity, security, and viability of your company’s entire product stack.
…what we’re asking some open source maintainers to do is to plan, build, and coordinate the foundations for an entire world.
Interesting how passion projects are about quality and a sense of intrinsic satisfaction that comes from that kind of slow, artful approach to building software. Throwing money at the issue doesn’t work because people throw money at issues that are sticky and difficult and nobody wants to do. That’s why they pay you to do it.
The future of software might just be like any other item’s: it’s born, it lives, and it dies. The circle of life:
the maintainers of the Gorilla framework did the right thing: they decommissioned a widely used project that was at risk of rotting from the inside out. And instead of letting it live in disarray or potentially fall into the hands of bad actors, it is simply gone. Its link on the chain of software has been purposefully broken to force anyone using it to choose a better, and hopefully, more secure option.
I do believe that open source software is entitled to a lifecycle — a beginning, a middle, and an end — and that no project is required to live on forever. That may not make everyone happy, but such is life.
Speaking about AI tools like ChatGPT, DALL-E, and Lensa:
It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society.
Why?
- these systems are inherently unreliable, frequently making errors of both reasoning and fact…
- they can easily be automated to generate misinformation at unprecedented scale.
- they cost almost nothing to operate, and so they are on a path to reducing the cost of generating disinformation to zero
I feel like part old-guy-yelling-at-cloud, but I am genuinely concerned about tools like this. Given how poorly we as society have used the tech we’ve created thus far, I’m not sure we’ll do much better with the next round of advancements.
Nation-states and other bad actors…are likely to use large language models as a new class of [weapon]…For them, the…unreliabilities of large language models are not an obstacle, but a virtue…
[they aim to create a] fog of misinformation [that] focuses on volume, and on creating uncertainty…They are aiming to create a world in which we are unable to know what we can trust; with these new tools, they might succeed.
seeing requires patience, requires letting the sight of something come to you, requires not reacting before you’ve seen fully. And looking more closely I think he has a very good point: which is that we live in a world full of distractions but short on breaks. The time between activities is consumed by other activities—the scrolling, swiping, tapping of managing a never-ending stream of notifications, of things coming at us that need doing. All that stuff means moments of absolutely nothing—of a gap, of an interval, of a beautiful absence—are themselves absent, missing, abolished.
Not gonna lie: I almost got interrupted while reading this short thing because I was filling a few minutes in between time catching up on my RSS.
if a high fidelity mockup has just been put together without planning and circular communication between developers, designers, project managers and clients/stakeholders then there will be massive oversights
Spot on in my experience. If you don’t build together, you get:
complicated interaction[s] that would be simplified with content strategy or content that follows no logical order, but looks nice, visually
But the larger point is one about process: a deliberate, slower process for “building designs” gives him precious time to actually think about what he’s doing.
it takes me ages to even get to the point of writing code and that’s been the case for a lot of years now. The benefit to being like this though is I get to put out fires before they even start. If I’m racing along trying to write code as fast as possible, I’m gonna start plenty of fires instead.
It does feel, sometimes, like we’ve made everything available at our fingertips so quickly, that we don't take the time to actually stop and think about what we’re doing. It’s just GO! GO DEVELOPER GO!
The world is full of noise because we are not in control of our information technology but the other way around…Writing is rewriting and rewriting until the thought becomes clear. AI may help here and there pointing you to unclear elements, but if AI writes for you, you will stop thinking.
AI will have eaten all our hobbies long before it fires us from our jobs
AI acts and feels like cancer. It grows uncontrolled out of our organic knowledge, and it grows where that organic knowledge already has developed some carcinogenic tissue.
How are you human if you leave understanding reading, thinking, writing, caring, and loving to a processor?
A critical intelligence is one that doesn’t accept the society and culture around us as a given, and demands explanations for it. Sometimes, when we do that, we find that things that seemed “normal” are actually incompatible with basic principles of justice. We come to see the familiar as strange again, and be unsettled by things we once accepted.
I used to hate being a professional critic. Critics are negative. Critics are the people who watch a movie that has taken hundreds of people a year to make, and have the audacity to just give it a “thumbs down.” They produce nothing. Their work is easy, because anything can be criticized. But over time, I’ve come to embrace the critical side of myself a bit more, because critics do something essential: they help people articulate their feelings and figure out why they don’t like certain things... The job of the critic is to help us find the words, and finding the words to explain a problem is a precondition of discussing a solution. The critic asks tough questions. A critic who hates a work of art we love might help us see it in a new light, and wonder what the sources of our taste are. Correspondingly, a critic who praises something we loathe might have found virtues in it we overlooked.
I don’t mean to encourage greater negativity in a world already overflowing with it, but there is a sense in which all criticism is constructive criticism, because all criticism implies that there are possibilities other than the present incarnation of whatever is being criticized
questions may not have definitive answers, but the act of searching for the answers still feels worthwhile.
Chris reflecting on his CSS-Tricks article “The Great Divide”:
Since there is too much for any web developer to know, what is the most graceful and professionally acceptable way of not knowing things?
Whatever the answer is, it’s definitely not “ignore, shit on, and downplay the things you don’t know and gatekeep the things you do.”
One of the most persistent myths in software development is the notion of the hero developer—the rockstar…They are legends.
And like most legends, they don’t really exist…
[they are] the Bigfoot of software development. Frequently sighted; rarely seen.
If nature teaches us anything, it’s this: your surrounding environment shapes so much of you, and yet somehow we remain so oblivious to it.
Management loves to do meaningless performance evaluations, pit employees against each other, and talk some developers up and others down, but the hard truth is that they’re the ones responsible for most of the variation in performance from worker to worker.
An in-depth look at an old friend: escaping.
the escaping required in an HTML attribute value is different from the escaping required in an HTML element body, which is in itself different from the escaping required for a query string parameter value inside an URL inside an HTML attribute value
the conceit of context-dependent autoescaping systems is that they can have a full understanding of the semantics of the language they are templating. However, this is not possible because languages such as (X)HTML are user-extensible.
manual escaping is an accident waiting to happen, and autoescaping requires language- and context-dependent analysis of the string being templated, which is liable to be unreliable.
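The quote’s point about differing contexts can be made concrete with Python’s standard library. A minimal sketch (the example string is my own) showing that the same value needs three different escapers depending on where it lands:

```python
import html
from urllib.parse import quote_plus

value = 'Tom & Jerry "classics" <3'

# Element body: & and < must be escaped; quotes may stay literal.
body = html.escape(value, quote=False)

# Double-quoted attribute value: quotes must be escaped too.
attr = html.escape(value, quote=True)

# Query-string parameter inside a URL: percent-encoding, a different scheme entirely.
param = quote_plus(value)

# Three contexts, three escapers, one template.
link = f'<a href="/search?q={param}" title="{attr}">{body}</a>'
```

Mix these up in either direction and you get exactly the “accident waiting to happen”: the body escaper inside the attribute leaves an injection point, and the URL escaper inside the body corrupts the text.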
When I think back, my favorite pair-programming sessions were the ones where things went wrong. These were the moments that taught me what being a programmer is all about. What do you do when you don’t know what to do? How do you break down a new problem? What tools do you reach for? When do you abandon your current approach? These are things you can’t learn from a blog post.
“What do you do when you don’t know what to do?” I really like that.
why should HTML and CSS ever be respected by people who call themselves “real developers” when almost any code soup results in something consumable? There are hardly any ramifications for coding mistakes, which means that over the years we focused on developer convenience rather than the quality of the end result.
…The web is ripe for attacks because of its lenience in what developers can throw at it.
Going back to the main design principle of the web where the user should get the best result and be our main focus. And not how easy it is to build a huge system in 20 minutes with x lines of code. Just because the resilience of the web means our code does not break doesn’t mean it works.
We yell and scream when we hear about big social media companies doing godawful things with our data and attention, yet we have no issues applying the stupidest marketing tricks to sell some service or product.
…now it's ChatGPT and AI generation era where everyone is trying to sell some AI service that generates something, even if that something has close to zero value...
…What the fuck are we doing here?
“The tyranny of easy metrics”:
Someone asked how this "strategy" of simply letting customers cancel without questions or hassle could be substantiated by data. Like, what measurements, what tests drove us to this place? It's a perfectly fair question in a world saturated with Growth Hacking, Chief Revenue Officers, and endless aspirations for exponential growth. But it's ultimately the wrong way to look at it.
It's easy to quantify the value of these hassling and haggling measures when they somehow manage to save a few customers, even if that is just 0.1% of those subjected. See! We earned an extra $32,856 last year putting everyone who wanted to cancel through the wringer. Yes, but at what cost?
This is the tyranny of easy metrics. It's easy to measure how much money is saved by preventing cancelations, it's much harder to measure how much long-term business is lost by poisoning your reputation with the 99.9% of customers who had to jump hoops and dodge sleazeballs to get out of the subscription...
Culture is what culture does. Culture isn't what you intend it to be. It's not what you hope or aspire for it to be. It's what you do…
And the good news is that culture is really a 50-day moving average. It's not a steady state. It's what you've done recently, what you're doing now, and what happens next. It's both along for the ride, and the ride itself. It's the byproduct of behavior.
Love this line:
The composting of failures produces rich and fertile soil.
What should you blog about? Just blog about anything you find interesting and thought-provoking. And use it to reply to people whose content you find interesting. Blogs are more fun when they’re used to have conversations.
This article is the detailed, technical overview of Mastodon I was looking for.
What I built isn’t an ActivityPub system as much as a Mastodon-compatible one. I think this is the key contradiction of the ActivityPub system: it’s a specification broad enough to encompass many different services, but ends up being too general to be useful by itself.
The contrast of ActivityPub to RSS is pretty stark. Read this and, damn, you gotta love RSS!
- You can implement an RSS feed with basically any system. A static site generated by a static site generator like Jekyll? Sure! You can even write an RSS feed by hand and upload it with FTP if you want.
- Your RSS feed doesn’t know who’s reading it. If you have 1 million people subscribed, sure, that’s fine. At most you’ll need to use caching or a CDN to help the server serve those requests, but they’re just GET requests, the simplest possible kind of internet.
- RSS has obvious points of optimization. If 10,000 people subscribe to my RSS feed but 5,000 of them are using Feedbin, those 5,000 can share the same GET request that Feedbin makes to pull the latest posts.
- An RSS feed reader only needs a list of feed URLs and an XML parser. It doesn’t need to have its own domain name or identity in the system. A feed reader can be a command-line script or a desktop application.
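Those optimization points lean on plain HTTP. As a sketch (the cache layout here is invented for illustration), a reader can replay the validators it saved from the previous fetch, so an unchanged feed costs a 304 Not Modified instead of a full download:

```python
def conditional_headers(cache_entry):
    """Turn validators saved from the last fetch into conditional-GET headers.

    A 304 Not Modified response then means the feed is unchanged and the
    cached copy can be reused.
    """
    headers = {}
    if cache_entry.get("etag"):
        headers["If-None-Match"] = cache_entry["etag"]
    if cache_entry.get("last_modified"):
        headers["If-Modified-Since"] = cache_entry["last_modified"]
    return headers

# First fetch: no validators saved yet, so it's an unconditional GET.
first = conditional_headers({})

# Later fetches replay whatever validators the server gave us.
later = conditional_headers({"etag": '"abc123"'})
```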
You can’t solve culture with technology.
Just that line alone is brilliantly applicable to our world now.
Want to have an unblockable, unbannable user profile? Buy yourself a domain and get a personal website. Want to have a space where you can say and do whatever the fuck you want? Get a webspace and put up a blog. Do you want to keep up with what other people are doing and saying online? Start using RSS or, and this is gonna sound like a very radical idea, bookmark their websites and every once in a while open them in your browser and see what they're up to. Want to also have discussions? Add comments to your website. Don't care about other people's opinions? Don't add comments to your site.
“State of…” surveys are being used to decide the road maps of major platforms and tools and are driving standardisation work.
Literally, management via popularity contest.
I’m all for hearing folks out, but I also worry about popularity replacing vision. I don’t trust the masses.
folks will accomplish more if you let them do some energizing work, even if that work itself isn’t very important.
Rigid adherence to any prioritization model, even one that’s conceptually correct…will often lead to the right list of priorities but a team that’s got too little energy to make forward progress. It’s not only reasonable to violate correct priorities to energize yourself and your team, modestly violating priorities to energize your team enroute to a broader goal is an open leadership secret. Leadership is getting to the correct place quickly, it’s not necessarily about walking in the straightest line.
The most important lesson I’ve learned as I’ve become a better manager is that there is almost always a correct answer, but applying that answer to your specific situation will always be nuanced and messy. Further, the correct answer is almost always different if you’re taking a short-term or long-term perspective.
Lots of good nuggets in here from Miriam Suzanne:
most lengths on the web are defined in px. Is that because authors intentionally avoid em/rem, or because the mockups that we receive from designers are all limited to px by our design tools?
many CSS-in-JS tools and utility frameworks are more invasive – replacing some or most CSS with a proprietary syntax. They don’t provide additions to the language, but a whole new language that stands directly between us and the basic CSS functionality we need
Once the tools stand between us and the language, we become entirely reliant on tool-builders to determine what features are available.
Suddenly CSS is able to move faster than the ecosystem, and we’re stuck waiting on our tools to catch up with well-supported platform features.
This happens with so many dependency-related slowdowns.
When tools intervene between you and your access to the web platform, proceed with caution. Ask not only: How well does it work? But also: How well does it fail? Not only: What features do they provide? But also: What features do they prevent?
Gruber on the App Store ad debacle:
“No ads in the App Store, period” would have been a powerful, appealing message…“We sell ads in the App Store, but they’re OK because they don’t track you” seems to be the message Apple is going for, but that’s neither powerful nor appealing.
Too many companies and brands today settle for mediocre, safe justifications in messaging. Commitment is more powerful than visual design caveats.
Apple is actually scrupulous about labeling paid placements as “ads”, and using different background colors for them. One can certainly argue that ads should be even more clearly demarcated, but if you look for it, it’s always clear. But people don’t look. If the message were clear — that there are no ads or paid placements in the App Store, period — people might learn. But if the message is that there are ads, but not many, but now there are more than there used to be, and but if you look closely you’ll see that the ads have a blue background and a small “ad” label — almost everyone is going to assume that anything that might be an ad is an ad and the whole App Store is pay-for-play all the way down.
If you’re an artist or writer and you’re using DALL-E or GPT-3 to “enhance” your work, or if you’re a programmer saying, “GitHub Co-Pilot makes me a better programmer,” then how could you possibly know? You’ve disrupted and bypassed your own creative process, which is thoughts -> (optionally words) -> actions -> feedback -> repeat, and instead seeded your canvas with ideas from a machine, the provenance of which you can’t understand, nor can the machine reliably explain. And the more you do this, the more you make your creative processes dependent on said machine, until you must question whether or not you could work at the same level without it.
Nicholas Carr has written at length about these kinds of ideas in his book on automation, The Glass Cage.
my hopes for web computing always felt limited by both the inertia of what Chrome already was (it’s hard to move the cheese on people), and by Google itself. A company that once oozed innovation now stood in its way. At some point, so much of our focus became navigating a sea of reasons not to innovate, for fear of causing users to see fewer ads. The ads model is an addictive one! And despite my lofty position at the company, this wasn’t something I could change.
Interesting insight (and admission) from Darin Fisher, co-creator of Google Chrome, on why he’s joining The Browser Company.
As an evergreen advocate of distilling your thinking into writing before starting visual designs, I find it intriguing that many of Ive’s designs start as words before even sketches.
Quotes from Ive in the article:
Language is so powerful. If [I say] I’m going to design a chair, think how dangerous that is. Because you’ve just said chair, you’ve just said no to a thousand ideas.
The most important lessons you would never choose to learn because they are so painful.
I’m not interested in breaking things. We have made a virtue out of destroying everything of value. It’s associated with being successful and selling a company for money. But it’s too easy—in three weeks we could break everything.
Ideas like the shower. Ideas like our pillows. Ideas like commutes. Ideas like walks. Ideas like the morning, or late nights. Ideas like daydreams. Ideas like you doing something else so they can surprise you. ... They aren't something you control — they bubble up, they arise. You don't get to have them when you want. They come to you.
That’s an interesting interview question: how do you generate ideas?
Max describes perfectly my experience with Mastodon:
Ok wait, you wanted to join mastodon, what’s all this now? Tildes? Furries? Some Belgian company? Why do you have to apply? Everyone else had that mastodon.social handle - Can’t you just use that? The real one? What the hell is a fediverse?
Confused, you close the site. This seems like it’s made for someone else. Maybe you’ll stick around on Twitter for a while longer, while it slowly burns down.
There are those who see the web merely as a tool to sell things or to gain influence or otherwise profit, and then there are the "web people" who enjoy the web as a medium of creation, who simply enjoy putting things out there for other people to appreciate.
Should I use emojis in my writing?
What and how you write is more important than how you decorate it.
I agree! The thing is, you probably already know the answer as to whether you should use them.
Do you use your emoji in a meaningful way, beyond measurability, clickability, usability and SEO? Or are you just making noise? Most of the time, you can find the answer without Google Analytics, eye trackers, CT, or lie detectors.
And while this refers to use of emojis, I feel the same way about social images these days:
[If] you do what everyone does, you do not stand out. Design is hard to measure. You can count seconds, clicks, and dollars. Meaning, beauty, love, and trust do not translate well into percentages.
Don’t use emoji when you don’t really have a meaning or purpose for it.
Do not hope the reader will figure out what you haven’t thought through
While using emoji has its place, unfortunately the common case seems to be:
Spread mechanically and without much thought, to add some color to an otherwise dull text, [emojis] just decorate boredom.
Lastly, I love this great point on why we write:
Finding verbal clarity on a subject of which one had only vague feelings, seeing clearly expressed what was only in the back of one’s mind, is one of the chief pleasures of reading good writing.
Even if we shipped more features one quarter than another, I wouldn’t actually believe that our velocity had necessarily gone up; it’s more likely that the features themselves were smaller.
If a state is important enough to indicate visually, it's probably important enough to expose to assistive technologies.
Love this idea of building state into HTML attributes for screen readers and doing your styling there.
Instead of:
.button--is-expanded {}
You do:
button[aria-expanded="true"]::before {}
there’s a reason why <article class="card" data-align="left" data-size="small" /> looks attractive — it’s mirroring the APIs we’re used to seeing in design systems and component libraries, but bringing it to vanilla HTML and CSS. Indeed, it’s a small step from data attribute selectors to custom pseudo selectors or prop-based selectors when using Web Components (think <my-card align="left" size="small" />).
I’m intrigued by this idea and the code samples.
<article
class="card"
data-loading="true"
data-variant="primary"
data-size="large"
data-border="top right"
data-elevation="high"
/>
<style>
.card[data-loading=true] {}
.card[data-variant=primary] {}
.card[data-size=large] {}
/* etc. */
</style>
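To see why this mirrors component-library APIs, here’s a small, hypothetical helper (Python, written for this note, not from the article) that renders a dict of “props” as data-* attributes for vanilla HTML:

```python
import html

def data_attrs(props):
    """Render component-style props as data-* attributes."""
    return " ".join(
        f'data-{name}="{html.escape(str(value))}"'
        for name, value in props.items()
    )

# The component's "props" become plain attributes the CSS above can select on.
card = f'<article class="card" {data_attrs({"variant": "primary", "size": "large"})}></article>'
```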
people are led to view the consequence of the ruthlessness of the machine as a kind of force of nature, and rendered powerless to object. Yet it is not a force of nature; in reality it is a system deliberately designed by humans to advance a ruthless end, without admitting to it.
“The system won’t let me do it.” How many times have you heard that excuse when pleading with another person for some kind of humane understanding and exception to your scenario?
companies realise they can escape from accountability by hiding behind the facelessness of the machine, which disempowers the individual to fight or object to it.
And that’s how we want our money to work?
using cryptography, systems — institutions — can be created which possess absolute integrity, where all past efforts to create such institutions have failed, having been comprised of humans who are infinitely more corruptible…
All transactions are final; none can be reversed. It does not matter if your coins were stolen, or if your family will come to ruin because of it; the system does not care. There can be no exceptions. If we desire a system of absolute integrity, we must accept these outcomes as the cost
Wow! I wasn’t aware of this piece from 1997 by WIRED.
Twenty-five years ago, it was projected that, in an ever-more interconnected world, money would no longer be the prime currency, attention would be. This would reshape social values, and as we became more engrossed in efforts to gain attention, we would shortchange those around us; in other words, the drive for self would come at the expense of concern for others. The projection has played out as prophetic, and the attention economy is here, with its associated societal shifts.
RSS solves many of the same problems [AMP, Facebook Instant Articles, Apple News Format] were trying to solve, like out-of-control JavaScript ruining the mobile web…
Guess which format is going to outlast all these proprietary syndication formats. I’d say RSS, which I believe to be true, but really, it’s HTML.
Great point about the longevity of HTML, especially since feeds (XML or JSON ones) are just a wrapper around the content which comes in guess what format? HTML.
A wonderful breakdown. This about sums it up right here:
[Critical CSS:] we tackled the wrong problem.
They ask me to rate my "experience". Thing is, I didn't have an experience. The delivery person just left the package by the mailbox and I grabbed it when I got home.
Maybe we’ve missed the mark.
They're trying to track everything, and asking people to attribute "experience" to things that are mere, routine happenings. Most things just don't need to be rated.
I had this experience the other day when I ran to Walmart to quickly pick up some Planters nuts and, upon finishing at the self-checkout aisle, was asked to rate my “experience”.
I'm seeing this everywhere and I can't help but think it's generating data that's incompatible with the actual situation. Being asked to rate minutia with a 10-point scale, and ascribe depth of an experience to something that's effectively flat and one dimensional, is overshooting the goal.
This one hits right on the nose. The idea of PHP’s incremental, linear dynamicity feels so lost in many of today’s technological abstractions.
Here's the problem: mildly dynamic functionality…is too minor to be worth bringing in a web application framework, and spawning and maintaining persistent processes just to serve a website.
What PHP offered is an increase in this effort proportional to the amount of dynamicity to be added to a website. This is something no framework can do…If you have a static website and you want to make it dynamic and you don't use PHP, pretty much all options available to you imply a massive increase in complexity that must be paid up front before you can do anything at all. You bring in a massive framework for the dynamic equivalent of “Hello World”, and then on top of that (if it's a non-PHP framework, meaning persistent processes are probably involved) the process for deploying your website radically changes and becomes far more complex.
This in turn has led to much of the functionality which might have previously been implemented on the server side on a “mildly dynamic” website being moved to JavaScript.
Client-side JavaScript, that is. The idea of dynamicity in a website became something to be outsourced, both in technology and compute (e.g. “just embed this <script>”).
PHP remains the only server-side technology really offering a linear increase in effort for a linear increase in dynamicity.
Suppose you are writing an article about some concept and you want to put some special dynamic widget in the middle of the article, which people can play with to gain an understanding of the concept. This is just one article you're writing, one of countless others; you're not going to spin up an application server (and maintain it indefinitely) just for this throwaway toy widget. The only reasonable implementation options are JavaScript and PHP…
Or, suppose a company makes a webpage for looking up products by their model number. If this page were made in 2005, it would probably be a single PHP page. It doesn't need a framework — it's one SELECT query, that's it. If this page were made in 2022, a conundrum will be faced: the company probably chose to use a statically generated website. The total number of products isn't too large, so instead their developers stuff a gigantic JSON file of model numbers for every product made by the company on the website and add some client-side JavaScript to download and query it. This increases download sizes and makes things slower, but at least you didn't have to spin up and maintain a new application server. This example is fictitious but I believe it to be representative.
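The 2005-style page really is just one SELECT. A minimal sketch of that lookup (Python with an in-memory SQLite table standing in for the PHP page; schema and names invented here):

```python
import sqlite3

# Stand-in for the company's product database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (model TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO products VALUES ('X-100', 'Widget X-100')")

def lookup(model_number):
    """The entire 'dynamic' part of the page: one parameterized SELECT."""
    row = conn.execute(
        "SELECT name FROM products WHERE model = ?", (model_number,)
    ).fetchone()
    return row[0] if row else None
```

Everything else about the page is static; that asymmetry is the linear effort-for-dynamicity the article credits to PHP.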
the ability to just give a piece of code an URL in 30 seconds, without complex deployment tooling, proprietary APIs or vendor lock-in seems to me a lot more valuable for the things I do
In actual supply chains, money is changing hands
Good perspective on the “software supply chain” and the capitalization of hobbyist software.
I just want to publish software that I think is neat so that other hobbyists can use and learn from it, and I otherwise want to be left the hell alone. I should be allowed to decide if something I wrote is “done”. The focus on securing the “software supply chain” has made it even more likely that releasing software for others to use will just mean more work for me that I don’t benefit from. I reject the idea that a concept so tenuous can be secured in the first place.
Ged, Anthony, Dave, and Talos immediately got to work on the first bullet, but without a backend server, there was no place to put files and metadata. So we made a Numbers spreadsheet and shared it in Dropbox along with the source images. Our Slack channel for the project was filled with “I’m going in” and “I’m out!” to avoid write conflicts. (S.W.A.T. = Software Write Avoidance Technique)
Love stories of people doing things in ways that might seem unsophisticated or clunky but that don’t get in the way of building something cool.
I believe we aren’t nostalgic for the technology, or the aesthetic, or even the open web ethos. What we’re nostalgic for is a time when outsiders were given a chance to do something fun, off to the side and left alone, because mainstream culture had no idea what the hell to do with this thing that was right in front of it.
The entire society behaves like a drug addict. We know we have a problem collectively but we do nothing to actually help ourselves.
User Hostile Experience is what I call the ever growing trend of making websites as annoying as possible for the average user in order to improve some idiotic metric no one cares about. I'm talking about Twitter forcing me to register or log in in order to read tweets. I'm talking about Instagram forcing me to register or log in in order to play a video a second time. I'm talking about websites forcing me to click through endless pages to improve their page views. And the list goes on and on and on
When I want to find a recipe for pizza dough on the web, I would consider myself lucky if I could get ahold of a blog post from someone who cares passionately about the right kind of dough, who maybe ran an artisan pizza kitchen in Naples for the past 30 years or has a background in baking. ‘Dream on’, you think. Well, these people exist on the web and the web is awesome for being an open platform that anyone with a passion can write on. I don't want to find text produced just because someone saw “pizza dough” is a common search phrase and a potential for top result ad money to be extracted. The passion that drives them isn't the pizza dough—that's fine, but it makes their content less relevant to me.
Gruber quoting Louis Anslow:
Creativity loves constraints, but it doesn’t love rules.
A good thing to remember when working on a design system.
In Product Design, we sprinkle a touch of “delight” on key moments—colorful illustrations in our onboarding, confetti for major milestone reached. In reality, it’s the mundane, everyday interactions that need our attention most.
it’s easy to see how everyone else mistakes the bureaucracy around the work for the work itself.
I believe in a cadence for teams to share and demonstrate progress with each other, gather feedback, and progressively iterate. But today’s standard “sprint”? I’ll sign the “I don’t believe in sprints” creed.
That’s what a backlog is; a list of useless tasks that makes people feel better. The next most important thing, the thing right around the corner, is all that matters…Discard everything else. Focus!
For better or worse, this is how I run a lot of my life. If it’s not important enough to rise to the level of my ability to remember it, it’s not important.
Site-speed is nondeterministic. I can reload the exact same page under the exact same network conditions over and over, and I can guarantee I will not get the exact same, say, DOMContentLoaded each time.
Show me a reproducible measurement that results in reproducible data and therefore reproducible conclusions and I’ll hold my tongue on “data”.
Trying to proxy the impact of reducing our CSS from our LCP time leaves us open to a lot of variance and nondeterminism. When we refreshed, perhaps we hit an outlying, huge first-byte time? What if another file on the critical path had dropped out of cache and needed fetching from the network? What if we incurred a DNS lookup this time that we hadn’t the previous time? Working in this manner requires that all things remain equal, and that just isn’t something we can guarantee. We can take reasonable measures (always refresh from a cold cache; throttle to a constant network speed), but we can’t account for everything.
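One common mitigation for the nondeterminism described above is to take many samples and report the median rather than trusting any single reload. A minimal sketch — the timing numbers are made up for illustration; in a browser each sample might come from `performance.getEntriesByType("navigation")[0].domContentLoadedEventEnd`:

```javascript
// One made-up DOMContentLoaded reading (in ms) per reload.
const samples = [812, 790, 1450, 805, 798, 821, 799];

// The median is robust against the occasional outlier in a way
// the mean is not.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}
// The 1450ms outlier (a cold cache? an extra DNS lookup?) barely
// moves the median, whereas it would drag the mean up noticeably.
```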
This is why we need to measure what we impact, not what we influence.
Numbers aren’t often what they seem.
Inadvertently capturing too much data—noise—can obscure our view of the progress we’re actually making, and even though we might end up at the desired outcome, it’s always better to be more forensic in assessing the impact of our work.
There’s nothing wrong with a fondness for data. The trouble begins when you begin to favor bad arguments that involve data over good arguments that don’t, or insist that metrics be introduced in realms where data can’t realistically be the foundation of a good argument.
My favorite topic: “data”.
The data scientists in a software organization usually are deployed on a narrow, selected set of problems where statistics translates very directly to increased revenue, and there’s not enough of them around to really make sure that the “data-driven” decisions being made by everyday software teams are being done on a robust statistical basis. If your culture is that every project, every team should be metrics driven, you’d better be hiring a boatload of data scientists.
And then this dagger. I believe it.
The champions of data are always careful to list all the caveats of measurement, but the implicit assertion is that metrics are useful in the common case; it is the exceptional case where measurement is inappropriate. I claim that the exact opposite is true. The common case is that you can’t measure what you want to measure, you can only measure a proxy and in order to meaningfully interpret even that, you either need to run an experiment that you probably don’t have the resources to run, or do statistics that you probably don’t have the expertise to do.
It’s a tricky situation. There’s an entire industry whose marketing and education budgets are dedicated to convincing people of the value of data and how their tool will help you measure, get numbers, and prove certainty amongst your peers/boss.
An overemphasis on data can harm your culture through two different channels. One is the suspension of disbelief. Metrics are important, says your organization, so you just proceed to introduce metrics in areas where they don’t belong and everybody just ignores the fact that they are meaningless. Two is the streetlight effect. Metrics are important, says the organization, so you encourage your engineers to focus disproportionately on improvements that are easy to measure through metrics - i.e. you focus too much on engagement, growth hacks, small, superficial changes that can be A/B tested, vs. sophisticated, more nuanced improvements whose impact is more meaningful but harder or impossible to measure.
Conclusion:
A weak argument founded on poorly-interpreted data is not better than a well-reasoned argument founded on observation and theory.
You don't even think about it and it works out. You try, it doesn't. You try harder, it really doesn't.
In many ways, this feels like the story of my life. If I try, I fail. If I go in not caring, I find success.
Trying too hard narrows the desirable outcomes.
Expectations are the enemy here — they limit the number of great landing spots, and make the idealized one impossibly hard. Relax your expectations, and hundreds of positive possibilities open up.
Got to reading this document from the Internet Architecture Board (IAB) and it’s a good one.
The intro is stellar:
Many who participate in the IETF are most comfortable making what we believe to be purely technical decisions; our process favors technical merit through our well-known mantra of "rough consensus and running code."
Nevertheless, the running code that results from our process (when things work well) inevitably has an impact beyond technical considerations, because the underlying decisions afford some uses while discouraging others. While we believe we are making only technical decisions, in reality, we are defining (in some degree) what is possible on the Internet itself.
This impact has become significant. As the Internet increasingly mediates essential functions in societies, it has unavoidably become profoundly political; it has helped people overthrow governments, revolutionize social orders, swing elections, control populations, collect data about individuals, and reveal secrets. It has created wealth for some individuals and companies while destroying that of others.
All of this raises the question: For whom do we go through the pain of gathering rough consensus and writing running code?
That’s some great self-awareness there.
Merely advancing the measurable success of [a thing] is not an adequate goal; doing so ignores how technology is so often used as a lever to assert power over users, rather than empower them.
Succinct.
the Internet will succeed or fail based upon the actions of its end users, because they are the driving force behind its growth to date. Not prioritizing them jeopardizes the network effect that the Internet relies upon to provide so much value.
A good argument for browser diversity, I think:
User agents act as intermediaries between a service and the end user; rather than downloading an executable program from a service that has arbitrary access into the users' system, the user agent only allows limited access to display content and run code in a sandboxed environment. End users are diverse and the ability of a few user agents to represent individual interests properly is imperfect, but this arrangement is an improvement over the alternative -- the need to trust a website completely with all information on your system to browse it.
Take away:
We should pay particular attention to the kinds of architectures we create and whether they encourage or discourage an Internet that works for end users.
Paul just has a way with words.
configuration is indistinguishable from procrastination
Also loved this note generalized to anyone who ever does anything successful:
But the likely outcome of the [NFT] boom is that some people will cash out at the right time and become convinced that they hold the keys to the universe and will lecture us for the rest of our lives.
And the ending:
I am very surprised that the terminal result of my efforts [to configure every aspect of my digital life] is not some sort of ecstatic communion with the internet, or even with my own computer. The function of my whole big orchestrated, tagged, integrated system was merely to rekindle old ties.
Blogging. Emailing. Tweeting. Coding. Configuring. None of these are about the practice themselves. They’re all about the friends we make along the way :)
The news does not matter. It has little, if any, real impact on your life besides what you allow it to have. Like a vampire, the news, whether mainstream, alternative, printed or screen-based, is a parasitic force that will drain you of your energy, happiness and rationality if you welcome it over your threshold and into your life. The key is to simply never invite it in.
That’s a bold statement to an intriguing article. What’s one to do?
If an event is actually important to your real life, you will find out about it. Such news will find you.
Feels like there are some parallels in here to “keeping up with” or “staying informed on” web dev news. Or, as the author calls it, “the illusion of staying informed”.
So how do you bring about change, then? Well, from my experience, once you ignore all of the things you cannot control and that have little bearing on your life (again, if there is some news that will actually affect your life you’ll hear about it), your focus narrows to your local environment. To yourself and your family and your street and your neighbourhood. These are things you can influence. And from here your influence ripples outwards, and rather than being trapped by impotent rage and fear and confusion, you see that the reality is that you can make things happen. And this is the only piece of news that matters
To be honest, this is part of why I like following individuals over publications or companies in web dev. I get to see the individual behind the writing — the human whose views are evolving, changing, growing, shrinking, whatever it might be. I walk that path with them through their writing. The sensationalism is missing (from most anyway) and you get to see a rough human whose edges are being chipped away and polished as they move through the world.
Good blogging advice:
Usually one side of your writing will be noticeably more popular than the other, and you will feel tempted to focus to build your audience and improve your signal-to-noise.
That’s your right, but also you would be depriving the world and your future self of the multifaceted insight machine that you are.
If I’m honest, “Dailies” are probably overkill, but I wouldn’t hate it. I would certainly prefer daily demos over vague, ritualistic standup-speak.
I kinda really like the idea of doing daily demos over “ritualistic standup-speak”, although I do wonder how long until we’d turn those into “ritualistic daily demos” and collectively accept each other’s subpar demos like we do our standups ha.
Frameworks and libraries are like layers,
and these layers accrete.
Every layer has a vector of intention,
pointing toward some idealized value to users,
determined by the author of the layer.
Opinion,
or the difference
between the vectors of intention of two adjacent layers,
always comes at a cost.
Opinion costs compound
and are, directly or indirectly,
shouldered by users.
An intriguing post where the author tries to explain his intuition about whether a framework is “good”.
Every framework and library [takes] the “what is” of the underlying platform (or layer) and [transforms] them to produce the “what should be”: its own APIs. For example, jQuery took the dizzying variety of ways to find a DOM element across different browsers back then (the “what is”) and produced the now-ubiquitous “$()” function (the “what should be”).
There are costs to the opinions in frameworks.
The cost is proportional to the degree of opinion. For example, if I decided to build a Javascript framework that completely reimagined UI rendering as graphs or three-dimensional octagonal lattices rather than trees, I would quickly find myself having to reinvent the universe. The resulting artifact will weigh some megabytes and consume some kilowatts, with DOM trees still impishly leaking out of my pristine abstractions here and there, necessitating tooling and various other aids to ensure successful launches of user experiences, built using my awesome framework.
What’s even more alarming is that opinion costs have a tendency to compound. Should developers find my framework appealing, I will see more high-profile sites adopting it, causing more developers to use it, extend on top of it, and so on. The outcome of this compounding loop is that users will find more of their computing resources drawn to chew on my opinions.
Framework abstractions should aim to trickle down to the platform. If they don’t, they become more expensive. And that cost compounds the more popular the library becomes and the further it diverges from the underlying web platform.
Design of the platform primitives matters, because it establishes the opinion cost structure for the entire developer ecosystem.
This is where so much friction exists, I think, between “components” and “web components”.
The opinion often comes across as treating the underlying platform as hostile, using as little of it as possible — which unfortunately, is sometimes necessary to make something that actually works.
But as Eric once said, every line of CSS you write is a suggestion to the browser. That’s not how we think about CSS though. We think of CSS as a series of instructions rather than suggestions. Never mind respecting the user’s preferences; one of the first things we do is reset all the user agent’s styles.
“User agent stylesheets” are a fascinating thing to me. It does seem like, as an industry, we don’t think about them as the default styles for the user. Rather, we see them as the absolute bare minimum which we feel obliged to completely disregard and overwrite without any forethought.
Overall this piece is a bit of a “growth hack” mindset in terms of writing, but there are some nuggets in there.
Day and night, your content searches the world for people and opportunities.
When you create content, people can access your knowledge without taking your time. You no longer need to sell knowledge by the hour. Your ideas are the most valuable currency in a knowledge-driven economy. Just as an investment account allows your money to grow day and night without your involvement, content does the same with your ideas.
I guess this makes me a rookie:
Trying to build an online audience without an email list is a rookie mistake.
This is part of why making my readingNotes, a list of links from my online readings each month, is useful:
If you publish something every week for a year, you’ll gain tremendous insights into what you should be creating.
Also this:
My best ideas don’t come from flashes of insight. Instead, they emerge from conversations, tweets, observations, feedback, and other forms of low cost, high-speed trial and error.
And this: you’re already doing the work, might as well synthesize it and make it available to others by writing about it.
You’re already processing a large volume of ideas through your everyday experience: with the social media updates you post, the books and articles you read, the emails you send, the conversations you have, and the meetings you attend. By consuming, digesting, and sharing these ideas with peers and colleagues, you’re already building expertise.
If I’m planning a wedding, it’s very helpful to have all wedding things together. This is data-first vs. app-first organization.
I need a t-shirt that says, “I ❤️ files”.
I want files to liberate my data from my own apps and create an ML explosion of activity! Files are at some level a hack, I get that, there are limits but they are an extremely useful and flexible hack. Like the QWERTY keyboard, they are “good enough” for most tasks. Files encapsulate a ‘chunk’ of your work and allow that chunk to be seen, moved, acted on, and accessed by multiple people and more importantly external 3rd party processes.
people were no longer pointed at the complexity of the problems they were trying to solve but the tools they were trying to use
It’s as if we care more about what Google considers to be fast than actual UX.
Gotta get those numbers.
I think time and experience have shown that, whatever the promises of SPAs, the reality has been less convincing
Personally, I still think the OG spa—Gmail for desktop—is still the best spa (unless you count my local day spa, then that one is the winner 🥁).
What about real-world [SPA] websites that aren’t built by JavaScript framework authors?
The important thing, I think, is to remain open-minded, skeptical, and analytical, and to accept that everything in software development has tradeoffs, and none of those tradeoffs are set in stone.
Ever find yourself about to ship something that isn't good enough?
…"We can always come back and fix it up later".
You can, but you won't.
New priorities pull harder than old ones.
Yeah. This is too true.
A lack of quality rarely qualifies as a bug, and it's hard to justify the time, effort, and tradeoffs required to come back with a polishing cloth down the road.
It’s interesting because the quality that comes out of companies who are renowned for it—like say Stripe—never seems like a follow up. Even on their initial launches. They always seem to have their best foot forward. I guess that can be attributed to their discipline for quality? They seem to know what deserves the highest quality and they make the right trade-offs to deliver on it up front, giving the impression to folks like me that everything they do is exceptional.
you can often see what a company values by what they leave unfinished or unloved.
Is it the notetaking system that’s helping you think more clearly? Or is it the act of writing that forces you to clarify your thoughts?
Is it the complex interlinked web of notes that helps you get new ideas? Or is it all the reading you’re doing to fill that notetaking app bucket?
Is all of this notetaking work making you smarter? Or is it just indirectly forcing you into deliberate, goalless practice?
In taking notes, it’s the journey that matters (the habitual process of taking notes, synthesizing ideas, and re-articulating them) not the destination (a highly-organized and tagged library of notes for recall). Even if I had to throw away every single note I’ve ever taken, I’d still do it because it’s the process—the act of taking notes—that’s the primary value. The artifacts are of secondary value.
However, if my suspicions are correct, then the primary benefit from notetaking comes from regular, deliberate practice. It doesn’t matter if you’re sketching, journaling, collaging, jotting down bullet points, recording a daily video, or just writing. It doesn’t matter if you’re filing it all away with a detailed ontology into a structured database or if you’re dumping it all into a box. It’s the habitual practice—in a way that fits your skill, personality, and practices—that matters.
The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.
Imagine you run a supermarket that offers fresh products only. A marketing message for this business may sound like “Fresh products every day.” Is it catchy? Probably no.
How can we make it look more interesting with the power of copywriting? — “We leave nothing for tomorrow.”
Why is the second option much more interesting and creative? — It makes you think! It creates a micro-conversation inside of your customer’s head: “Why don’t they leave anything for tomorrow? — Because they bring fresh product every day, and what’s left at the end of the day is probably donated to poor people”.
I marvel in jealousy at people who can write purposefully like this.
Leave it to human biology to be non-uniform and resistant to being easily mapped to a format computers want:
This may seem all like a pointless transformation, but there is a good reason for doing all this nonlinear mapping. The human eye is not a simple detector of the power of the incoming light – its response is nonlinear. A two-fold increase in emitted number of photons per second will not be perceived as twice as bright light.
What does it mean?
Linear encoding has the same precision of light intensity everywhere, which, accounting for our nonlinear perception, ends up having very quickly increasing brightness at the dark end and very slowly decreasing darkness at the bright end.
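The nonlinear mapping in question is the standard sRGB transfer function, which spends more of the available precision on dark values to roughly match the eye’s nonlinear response. A sketch of the encode direction:

```javascript
// Standard sRGB transfer function: linear light (0..1) in,
// nonlinear encoded value (0..1) out. Near black it's linear
// (12.92 * x); elsewhere it's a gamma-like power curve.
function linearToSrgb(linear) {
  return linear <= 0.0031308
    ? 12.92 * linear
    : 1.055 * Math.pow(linear, 1 / 2.4) - 0.055;
}
// Halving the physical light does not halve perceived brightness:
// linearToSrgb(0.5) ≈ 0.735, not 0.5 — the dark half of the range
// gets far more than half of the encoded values.
```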
Great post on design systems. Sometimes you need to make a concession and create something that doesn’t exist and isn’t standardized just so people can get stuff done.
Because this is the true challenge of design systems work: the difference between correct-ness and useful-ness. We could document everything—every disabled button hover state and every possible combination of components—within Figma. We could name them precisely as we do in the front-end. That’s correct-ness. I see a ton of design systems within Figma that are desperately trying to be correct. But if we want our design system to be useful to our team then we need to cut things out. We really don’t need everything in Figma, only what will speed us up as designers.
The fact is:
Stuff changes too much to ever expect 100% correctness within Figma.
What your users want will likely tend towards being useful vs. being correct. It’s classic product design. You want to make what’s correct—what’s logically self-consistent. People who use it don’t care; they just want to get stuff done and use a tool to help them.
Commencement speech from Bill Watterson, author of the comic strip Calvin and Hobbes. Lots of insight in here from the life of a creative.
my fondest memories of college are times like these, where things were done out of some inexplicable inner imperative, rather than because the work was demanded.
You may be surprised to find how quickly you start to see your life in terms of other people's expectations
The truth is, most of us discover where we are headed when we arrive. At that time, we turn around and say, yes, this is obviously where I was going all along. It's a good idea to try to enjoy the scenery on the detours, because you'll probably take a few.
Drawing comic strips for five years without pay drove home the point that the fun of cartooning wasn't in the money; it was in the work.
Selling out is usually more a matter of buying in. Sell out, and you're really buying into someone else's system of values, rules and rewards.
The so-called "opportunity" I faced would have meant giving up my individual voice for that of a money-grubbing corporation. It would have meant my purpose in writing was to sell things, not say things. My pride in craft would be sacrificed to the efficiency of mass production and the work of assistants. Authorship would become committee decision. Creativity would become work for pay. Art would turn into commerce. In short, money was supposed to supply all the meaning I'd need.
You'll be told in a hundred ways, some subtle and some not, to keep climbing, and never be satisfied with where you are, who you are, and what you're doing.
In a culture that relentlessly promotes avarice and excess as the good life, a person happy doing his own work is usually considered an eccentric, if not a subversive. Ambition is only understood if it’s to rise to the top of some imaginary ladder of success. Someone who takes an undemanding job because it affords him the time to pursue other interests and activities is considered a flake. A person who abandons a career in order to stay home and raise children is considered not to be living up to his potential-as if a job title and salary are the sole measure of human worth.
We can watch really old movies today — movies that aren’t just years or decades old, but generations old. We can read works of literature that are centuries old. But we can’t play iPhone games that are three years old unless the developers constantly devote time and attention to making sure they keep up with the latest SDKs every 2-3 years? Pixar doesn’t have to re-render Toy Story every couple of years.
That’s the great thing about the web: if you own your content on your own website 1) you’re not subject to a giant corporation kicking you out (you’re only subject to your own forgetfulness or inability to rent your domain or pay your hosting bill) and 2) websites can have a much longer shelf life than something from a native OS SDK.
The first iPhone app isn’t available on a modern iPhone, but the first website is still accessible from a modern browser—even on an iPhone, 1st or latest generation. Think about that!
Have you felt that feeling? That moment of uncertainty, where you don’t know what the solution will look like. You’ve solved many other problems before this one (and so far they keep paying you), but now you have to reach into the creative ether again to come up with a way to solve this new one.
Yes. I’ve had that feeling.
It can be uncomfortable… these moments of uncertainty, where everything is still an amorphous, blurry mist that has yet to come fully into focus.
Slowly, over time, the mist starts to clear up. Suddenly you can see how things are connected. After a chat with a coworker, a deep-dive into the code, or a walk around the block, something clicks, and some of the pieces start falling into place. The mist transforms into an outline, the outline into a conversation, the conversation into a diagram, the diagram into a few pull requests, the pull requests into follow up pull requests, and finally this vague problem has been translated from that amorphous blob into concrete lines of code.
This process is one of the most interesting parts of software development…I’ve gotten used to it and, dare I say, almost started to enjoy it. There’s still that moment of uncertainty, but I have my ways to get through it. I think it’s an important skill to build, this familiarity with the unknown and the uncertainty, and finding ways that are effective for you to get through it can be very empowering.
TBH, I still fear this sometimes. I’ve had moments of facing a big, 6-month design project where I think “will we actually be able to solve this? Will the business go under because we don’t?” It always seems so big in the moment. But the journey of a thousand miles begins with one step.
Progressive enhancement:
provide at least a working experience to the most users possible, and jazz things up for users whose browsers and devices can support those enhancements…
To me, it’s easy enough to nod along, but in practice we fail all the time
This article does a deep dive on how to do (what I’ll call here) “vanilla” progressive enhancement.
Plug: what’s great about Remix is it gives you a lot of what’s in this article for free but with the modern ergonomics you’re used to in building apps.
I’ve reluctantly come to believe that URIs and email are the durable interface and protocol that will live long past every given platform’s peak adoption...
If you plan to write across decades, you simply must own the interfaces to your content. You should absolutely delve into other platforms as they come and go–they often become extraordinary communities with great distribution–but they’ll never be a durable home.
Agree. This is why I have a hard time, as much as I want to, jumping on these modern email platforms that don’t support standard email protocols (something I wrote about previously).
No free lunch:
What the miners are doing is literally wasting tons of electricity to prove that the record is intact, because anybody who would want to attack it has to waste that similar kind of electricity.
This creates a couple of real imbalances. Either they’re insecure or they’re inefficient, meaning that if you don’t waste a lot of energy, someone can rewrite history cheaply. If you don’t want people to rewrite history, you have to be wasting tons and tons of resources 24/7, 365
Design Thinking education willfully ignores these complexities, preferring to wrap Design into a digestible package, and in so doing establishing it as a simple, reproducible and processional endeavor. This approach dramatically simplifies the highly complex, nuanced, non-linear reality of Design to a grotesque degree.
Spicy! There’s more:
Given the genesis of Design Thinking — emerging as it did from the bowels of international consulting firm IDEO — it’s perhaps no coincidence that these five tidy phases closely mirror the ‘phase billing’ techniques employed by Design consultancies. Each portion of a project proceeds conveniently along pre-agreed paths, with pre-agreed outcomes on pre-agreed schedules. Real Design work is complex, chaotic and messy, Design Thinking is linear, simplistic and procedural.
Maybe too good, too neat, to be true?
The seamless stepping from one phase to the next, wrapping up neatly with a ‘thing to be made’ is disconcerting and reductive, and (as mentioned in my first critique) reflects a phase-billing attitude common in client services industries. Like ‘Agile’, or ‘Scrum’ or any other product development tool, Design Thinking offers some basic organizational logic to a process, but it implies a level of closure which isn’t present in reality. It’s a fallacy of rapidity, of repeatability, of clean outputs and finite solutions.
Who doesn’t love a good, spicy take on something accepted as gospel by a wide swath of people?
I’m talking, of course, about Jared Spool’s take on the Net Promoter Score.
People who believe in NPS believe in something that doesn’t actually do what they want. NPS scores are the equivalent of a daily horoscope. There’s no science here, just faith.
As UX professionals, we probably can’t convince believers that their astrology isn’t real. However, we can avoid the traps and use measures that will deliver more value to the organization.
You might be asking, “What the heck is Net Promoter Score?” It was supposed to be a way to gauge customers’ feelings towards a business.
In 2003, a marketing consultant named Fred Reichheld lit the business world on fire with the Harvard Business Review article “The One Number You Need To Grow”…He ended the article with “This number is the one number you need to grow. It’s that simple and that profound.”
It turns out, it’s neither simple nor profound.
(Like so many purported Next Big Things™️.)
Spool has so many hot jabs in here and I love it:
[NPS has] all the common requirements of a “useful” business metric:
- It’s easy to measure.
- It produces a number you can track.
- It feels legitimate.
While specifically about NPS, the article is a cautionary tale about leaning too hard into any metric. Take a closer look at how the metric is calculated and you’ll see why someone might say, “Pay no attention to the metric man behind the curtain!”
The article is also a great look at measuring data and asking the right questions:
The best research questions are about past behavior, not future behavior. Asking a study participant Will you try to live a healthy lifestyle? or Are you going to give up sugar? or Will you purchase this product? requires they predict their future behavior. We are more interested in what they’ve done than what they’ll do. We’re interested in actual behavior, not a prediction of behavior.
The sad reality of so many of our metrics is hidden behind the incentives!
If your bonus is tied to an increase in the NPS ratings, offering a $100 incentive is a great way to raise your scores.
The lesson is: be wary of anything that purports to reduce something to a number.
On using rems for media queries:
Suppose a user sets their default text size to 32px, double the standard text size. This means that 50rem will now be equal to 1600px instead of 800px.
By sliding the breakpoint up like this, it means that the user will see the mobile layout until their window is at least 1600px wide. If they're on a laptop, it's very likely they'll see the mobile layout instead of the desktop layout.
At first, I thought this seemed like a bad thing. They're not actually a mobile user, so why would we show them the mobile layout??
I've come to realize, however, that we usually do want to use rems for media queries.
A great post from Josh—as always.
We're so used to thinking of media queries in terms of mobile/tablet/desktop, but I think it's more helpful to think in terms of available space.
A mobile user has less available space than a desktop user, and so we design layouts that are optimized for that amount of space. Similarly, when someone cranks up their default font size, they reduce the amount of available space, and so they should probably receive the same optimizations.
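The arithmetic behind Josh’s example is worth spelling out. Here’s a tiny sketch in JavaScript — the helper name and defaults are mine, not from the post — resting on the fact that rem values in media queries resolve against the user’s default font size (16px unless they’ve changed it):

```javascript
// Effective pixel width of a rem-based breakpoint for a given user
// default font size. Hypothetical helper, not from the original post.
function effectiveBreakpointPx(breakpointRem, userDefaultFontPx = 16) {
  return breakpointRem * userDefaultFontPx;
}

console.log(effectiveBreakpointPx(50));     // 800 — the standard case
console.log(effectiveBreakpointPx(50, 32)); // 1600 — text size doubled
```

Same `50rem` breakpoint, double the effective width — which is exactly why the mobile layout kicks in for large-text users.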
A take that, in my limited experience, reflects most of the reality around accessibility.
The system is, sadly, ableist and almost every website you look at has accessibility conformance and usability issues… it has often frustrated me and made me more cynical than I want to be. You write down the same issues over and over, knowing they are just a few lines of code. I always had to channel my inner developer again, and remember what it can be like. Yes, removing the line outline: none
is trivial, and it's extremely ableist to keep it in, but what can a developer do if the QA and/or design departments flag it as a bug and they're the only one on the team who ‘gets’ this need. Let's not blame the developer, let's blame the ableist system we all operate in.
I want designers to be participants in the research as also every other executive. Again, if you have a standalone research team that is just going off independently doing research and presenting it back, the people who are consuming the research haven't really felt the pain points. It's very, very different to go to an interview or three or four interviews and see the same thing come up again and again, and that bringing some internal insight to the people, the product managers, the designers who are making that decision, than being completely arms length and reading a bunch of decks, which this item becomes just a bullet point item.
Zeeshan Lakhani, an engineering director at BlockFi, Darren Newton, an engineering team lead at Datadog, and David Ashby, a staff engineer at SageSure, all met while working at a company called Arc90. They found that none of them had formal training in computer science, but they all wanted to learn more. All three came from humanities and arts disciplines: Ashby has an English degree with a history minor, Newton went to art school twice, and Lakhani went to film school for undergrad before getting a master’s degree in music and audio engineering.
I worked at Arc90 with these folks and this is what I loved about Arc90: the interdisciplinary education outside of computer science and design was off the charts. People from all over the spectrum of education.
are the numbers good? They focus on easy-to-gather quantities and neglect any measure of quality.
I just want to stand and clap at everything in here.
Average time on page; bounce rates; sessions with search; page depth etc. Which of these are important for you to know? And for each metric, what number is a sign of success?
If you want to make something transformative, look where nobody else is looking.
In setting any metric it’s important to benchmark where you are, where you want to get to, and by when. This information prevents panic and helps track progress.
Imagine a maze for a minute. Heading towards your “goal” isn’t going to help. In fact, you have to do the opposite to get there. You have to do something your metrics will tell you is wrong: you have to move in a direction that, when measured, looks like failure. You move away from your goal to get to it. How do you justify that? Not everything is as clear cut as numbers make it seem.
Numbers aren’t intrinsically good or bad. They’re just indicators to help you understand a situation and take a sensible course of action. They aren’t written in stone to be slavishly followed forever.
A good set of meaningful metrics should be personal to your situation. The numbers you track should be one of many inputs, both quantitative and qualitative. What you measure will benefit from regular review and should be changed if the measurements no longer help you chart a course into your desired future.
[people on] Forbes 30 Under […] found their callings at a young age and were able to doggedly pursue them. That is amazing…and rare.
For a lot of us, clarity takes its sweet time […]
If everyone lived from zero to 100 and matured at the same rate, it would be fair to issue sweeping comparisons. But that’s not how it works. We don’t all have the same opportunities. We don’t all take the same paths. We don’t all get the same amount of time.
I love the list of “people who found success after 40 and/or did cool stuff later in life”. Take, for example, Harry Bernstein who published his first memoir at age 96. He wrote two more books and declared: “The 90s were the most productive years of my life.”
In the United States, if you are pregnant over age 35, it’s considered a “geriatric pregnancy.” This is an outdated term — the preferred terminology is “advanced maternal age” — but trust me, the former still makes the rounds.
In France, a pregnancy over age 40 is called a “grossesse tardive” — as a French friend explained, tardive means you’re a bit delayed for something, “like when you’re late for a plane, or late to the party.”
Like the author, I love this recasting of terminology from “you’re old doing this” to “you’re late doing this”.
My good friend recently decided to go back to school in her mid-forties, to pursue a path that always spoke to her, but took a backseat to the more “reasonable” choices she made early in her career. “There’s a part of me that thinks, f*ck, have I wasted the last twenty years?” she laughs. “But the answer is no. I wouldn’t have been as ready for it as I am now. In the end, everything has its time.”
It's hard to overstate just how complex and intertwined [the UA string] is and what astounding amounts of money across the industry have been spent adversarially on ... a string.
Really makes you wonder how divorced our perception of UA string data is from reality. And the truth is, probably nobody really knows.
The way that we've found that's most effective to get interesting behavior out of AIs is to just pour data into them. This creates a dynamic that is really socially harmful. We're on the point of introducing these Orwellian microphones into everybody's house and all that data is going to be used to train neural networks which will then become better and better at listening to what we want to do.
If you think the road to AI goes through this pathway, then you really want to maximize the amount of data that is collected…It reinforces this idea that we have to collect as much data and do as much surveillance as possible.
I always love Maciej’s take on tech.
AI risk is like string theory for programmers. It’s fun to think about, you build these towers of thought and then you climb up into them and pull the ladder up behind you so you’re disconnected from anything. There's no way to put them to the test short of creating the thing which we have no idea how to do.
The Web’s size and diversity makes client-side “fast enough” impossible to judge.
The web’s size and diversity make the assertion “____ enough” misleading in nearly any circumstance unless you have more context. Nothing is ever “enough” unless you can say “enough compared to ____”.
when fresh pageloads are fast, you can cheat: who cares about reloading when it’s near-instant?
Hitting refresh is the “have you tried turning it on and off again” of the web. And nobody will care to reboot your web page if the cost is negligible.
So many good nuggets in this series.
the thing that lasts longest with our websites is probably the part that we spend the least time thinking about—the markup…
This is the second law of thermodynamics made clear on the web: the entropy of any isolated system always increases and, at some point or another, all that’s left of a website is the markup.
Data pipelines take on an institutional life of their own. It doesn't really help that people speak about “the data driven business” like they’re talking about “the Christ centered life” in these almost religious tones of fervor.
Great counterpoints to the religion of data.
The promise you’re told is that enough data is going to lead you to insight.
I worry the reason we haven't learned from the fiasco of the 60's, the systems analysis, the fetishizing of data, is because after all it's only anecdotal. There's only the one data point.
Talking about Eroom’s law, which is Moore’s law but backwards for the drug industry (the research that 2 cents would have bought you in the 50’s costs you 1 dollar today, and it’s increasing exponentially in cost).
The basic fact is that a chain-smoking chemist just randomly shooting compounds into mice is a more cost-effective way to discover drugs than an entire genomics data set. That is a bizarre result.
Speaking about this relationship where you measure the world and then make judgements. Then humans enter the world, see what you're modeling and measuring, and adapt to get around your measurements. So you take into account their cheats and then update your rules.
Notice what you've started to do. Instead of just measuring the world, you're now in this adversarial relationship with another human being and you've introduced issues of power and agency and control that weren't there before. You thought you were getting a better idea of what is happening in the reality, but you've actually just introduced an additional layer between yourself and reality. This kind of thing happens over and over again.
Characteristics of the physical world make for great behaviors.
In the same way that you would design an icon for mass interpretation, leveraging concepts familiar to the most amount of humans possible, you can do the same with non-visual intuitions we share as humans, such as how objects move through time and space.
We all have a shared understanding, or shared intuition, of how a car moves through the world.
We all know intuitively through experience how objects move through the world and how we can manipulate those objects depending on their movement.
you might notice that I haven't used the word duration. We actually like to avoid using duration when we're describing elastic behaviors, because it reinforces this concept of constant dynamic change. The spring is always moving, and it's ready to move somewhere else.
A great talk.
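The “no duration, always ready to move somewhere else” idea can be sketched as a tiny physics step. This is my own minimal damped-spring integrator, not code from the talk; the stiffness and damping values are arbitrary (chosen near critical damping) and the timestep assumes 60fps:

```javascript
// One semi-implicit Euler step of a damped spring. There is no
// duration anywhere: the value simply settles when displacement
// and velocity both approach zero. Parameters are assumptions.
function springStep(position, velocity, target, dt, stiffness = 170, damping = 26) {
  const displacement = position - target;
  const acceleration = -stiffness * displacement - damping * velocity;
  const nextVelocity = velocity + acceleration * dt;
  return [position + nextVelocity * dt, nextVelocity];
}

// Simulate until settled — no duration was ever specified.
let p = 0, v = 0;
for (let i = 0; i < 1000; i++) [p, v] = springStep(p, v, 100, 1 / 60);
console.log(Math.round(p)); // settles at the target, 100
```

Retarget mid-flight and the spring just keeps integrating from its current position and velocity, which is the “constant dynamic change” quality the talk describes.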
Everyone at The Browser Company swears there's no Master Plan, or much of a roadmap. What they have is a lot of ideas, a base on which they can develop really quickly, and a deep affinity for prototypes. "You can't just think really hard and design the best web browser," Parrott said. "You have to feel it and put it in front of people and get them to react to it."
Constant prototyping as a strategic advantage: if you have the infrastructure to consistently be trying, iterating on, and delivering new things – along with ever frothing ideas – you open yourself to serendipity and, once something strikes, you’ll have everything in place to deliver it fast and effectively.
[Look at all these different components.] What variety! And that’s ok! This is the reality of enterprise product design at scale. It reflects the nature of parallel roadmaps, design system team resourcing and bandwidth, business priorities, and many more factors.
A great read, and dose of reality, on design systems.
Some organizations seem to hold up the ideal that, once a design system exists, everything in an interface can and should be built with it. Not only is that an unrealistic goal for most enterprises, but it can often be a toxic mindset that anything less than 100% coverage is misuse of a design system at best or utter failure at worst.
We often use the Pareto principle—often known as the “80/20 rule”—to set an actionable target for teams: aim for up to 80% of any given page to be made of design system components and leave room for 20% of the page to be custom. That remaining 20% is where the invention and innovation can happen. One of our recent clients added some anecdotal and complementary motivation to this: they reported that they spent only 20% of their sprint time creating 80% of their pages with the design system, which then freed up 80% of the sprint time to work on the 20% of custom functionality that really made the experience sing. This is exactly the kind of efficiency that design systems should enable!
This can be a hard thing to get people to understand.
We used to suggest 10% as a starting point, with a plan to work up to 80% eventually, likely over the course of a year or two.
My trust in analytics data is at an all-time low.
Great post by Dave. It’s absolutely wild to me the disparity between data sets that, presumably, are measuring the same thing.
If I, or some hypothetical manager, put too much stock into these metrics I could see it causing a firestorm of reprioritization based on bot traffic. We’d be chasing the tail of a…bot somewhere in Germany.
A tremendous read. Deep and thoughtful, as always from Frank. A few excerpts I loved.
First, on the non-commercialness of libraries:
a library is one of the few remaining places that cares more about you than your wallet. It means that a person can be a person there: not a customer, not a user, not an economic agent, not a pair of eyes to monetize, but a citizen and community-member, a reader and a thinker, a mind and—God, I am going to say it—a soul.
The web, or at least part of it, has this ethos in it (love the suggested correlation of “public lands” and “open protocols”):
the web is a boundless and shared estate, and we only later learned how to commercialize it. The commercial endeavors that now dominate our digital experience sit on public land, or, should I say, open protocols.
But the public library web is drowned out by the outsized commercial influences:
the web is a marketplace and a commonwealth, so we have both commerce and culture; it’s just that the non-commercial bits of the web get more difficult to see in comparison to the outsized presence of the commercial web and all that caters to it. It’s a visibility problem that’s an inadvertent consequence of values
Honestly, I haven’t watched this whole thing. It’s long. But this excerpt aptly describes a problem from which so much software and technology suffers.
Cryptocurrency does nothing to address 99% of the problems with the banking industry because those problems are patterns of human behavior. They’re incentives, they’re social structures, they’re modalities. The problem is what people are doing to others, not that the building they’re doing it in has the word “Bank” on the outside.
I first saw Dave tweet about this article:
I noticed a shift some years ago at meetups where the question shifted from "what do you do?" to "where have you worked?" and Twitter bios became micro-résumés.
I remember feeling a bit deflated by it. Not just because I work for a small 3 person company and not a megazord FAANG company, but because it made every introduction feel like a transaction to assess someone's value. Like I could feel people mentally drawing dollar signs on me.
And the article is a good ‘un.
I’m noticing that more and more people on LinkedIn and Twitter are replacing their profile synopsis with a simple list of previous companies where they have worked…Instead of a thoughtful introduction, profiles now sport a list of previous employers, like a race car driver’s uniform covered in logos.
The problem with this seemingly clever use of limited character counts is that it reduces the value of people against the brands of the companies where they worked.
Am I supposed to be swept away in your brilliance because you collected a paycheck at these places?
I always enjoy these kinds of takes—“things I’ve learned over two decades of doing this”.
Pick the right tool for the job or you’ll have to find the right job for the tool you got.
Respect people more than code.
Don’t attach your identity to your code. Don’t attach anyone’s identity to their code. Realize that people are separate from the artifacts they produce.
Don’t do speculative programming. Only make the code extensible if it is a validated assumption that it’ll be extended. Chances are by the time it gets extended, the problem definition looks different from when you wrote the code.
Worth remembering:
HTTP/1.1 204 No Content
Cache-Control: max-age=999999999,immutable
This is the fastest web page. You may not like it, but this is what peak performance looks like.
That may seem unhelpful — of course a useful page is slower than literally nothing! — but anything added to a frontend can only slow it down. The further something pushes you from the Web’s natural speed, the more work needed to claw it back.
One big step towards becoming a tech lead is to use your experience to help people grow. Not to let your horrible memories taint possible great new things to come.
Resist the temptation to use images. If you are unable to distill the concepts or thoughts into language, then you have likely not fully understood the problem. Use illustrations as supplementary, not instead of information.
That's it. You should find that you soon will spend a lot more time on defining your thoughts and putting the important information forward rather than fiddling with font faces, sizes and colors.
I like this idea of trying to make images supportive not essential, like a progressive enhancement of narrative: if the images aren’t there, you can still understand what’s being explained in the words.
I’m not saying get rid of images, but there’s an art to this kind of communication. As an example, I like what Gruber does on Daring Fireball where images are often supplementary and therefore hyperlinked via the text that describes them (rather than displayed directly inline).
I’m not sure I’m ready to go all in on that, but there’s something appealing about it to me personally.
We’ve been led into a culture that has been engineered to leave us tired, hungry for indulgence, willing to pay a lot for convenience and entertainment, and most importantly, vaguely dissatisfied with our lives so that we continue wanting things we don’t have. We buy so much because it always seems like something is still missing.
The working world also largely disincentivizes broadcasting weakness, which is a huge loss given failure represents an opportunity
Lovely post from Eric. Spot on in defining a problem area, and specific in recommending solutions.
Modeling your notion of what success is from what others publicly share doesn’t grant you the vital context of why they made the decisions they did, and what constraints they had to work with.
I can see this practice being useful, even for only myself in a non-work context like blogging, but the work context is probably even more transformative.
At the end of the day, we all still have to produce value for the place that employs us, with value being highly dependent on the organization’s priorities.
[for many] I only exist when someone takes pity on me and links to my blog from Twitter, Reddit, Hacker News, or a big site like CSS Tricks...
For those people who are re-sharing my content on social media, I suspect most of them found it from their RSS feed. So RSS definitely still seems alive and well, even if it’s just a small upstream tributary for the roaring downstream river of Twitter, Reddit, etc
The best thing I’ve ever done in my career is blog about my specific problems with browsers (or any software you’re passionate about).
I’ve seen this too. Not necessarily in the same way but if nothing else as a coping mechanism. Write it out. You’ll feel better when it’s over. And others may read it and feel better too—“hey I’m not alone, someone else feels that way too”—and sometimes the train stops there. You don’t always need an outcome from a pressure point.
A single blog post is worth 10,000 tweets. It’s valuable because it shows you thought through your problem and narrowed it down to a set of specific issues.
A short talk worth watching. Shows how we are the ones driving innovation because we need to make pragmatic choices in the things we build today.
The Abstraction Fallacy
Making a messy model a bit cleaner increases utility radically.
An infinitely pure model must therefore be infinitely useful.
Actually
Optimal representations are pragmatic.
They're only useful for a specific set of situations.
Eric with a shakedown of comic sans jokes:
Even though I put a lot of effort into selecting typefaces, I’m not precious about it. If someone changes the typeface, its font size, line height, letter spacing, and color to meet their needs, I’m delighted! It means that they’re interested enough in the content to expend effort to make it legible.
Why do you care so much about setting and overriding (see: dictating) someone else’s preferences?
The thing is, you can’t know what works for someone’s access needs, but you can provide mechanisms for them to help themselves, and that’s totally fine.
I want to start by just pointing out that what we're trying to do here is kind of crazy. We want to:
- Download code
- from the internet
- written by unknown individuals
- that we haven't read
- that we execute
- with full permissions
- on our laptops and servers
- where we keep our most important data
This is what we're doing every day when we use npm install.
Well, when you put it that way…
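If that framing makes you nervous, one small mitigation (my suggestion, not the article’s) is npm’s real `ignore-scripts` setting, which stops packages from running lifecycle scripts at install time:

```ini
# Hypothetical .npmrc fragment: refuse to run packages' install
# scripts, one common vector for executing arbitrary code. It doesn't
# make untrusted code safe to *run*, only a bit safer to *install*.
ignore-scripts=true
```

It’s a partial answer at best — you’re still executing the code eventually — but it narrows the window.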
I suspect that the prominence of Markdown has held back innovation and progress for digital content.
Ah, ok ok. As a lover of markdown, I’m here to see how my entire world might be upended. Lay it on me:
does [git + markdown] really represent the best workflow for people who are primarily working with content? Isn’t this a case where developer experience has trumped editor experience…?
Embedding specific presentation concerns in your content has increasingly become a liability and something that will get in the way of adapting, iterating, and moving quickly with your content. It locks it down in ways that are much more subtle than having content in a database.
I can agree with some of the sentiments in this article on a certain level.
But there’s another plane of understanding here where I could argue that digital content is cheapened by the promise of quick economical benefit. Much of the content on the web is prepackaged filler, not meant to sustain but merely fill.
What makes markdown compelling, to me, is less about the syntax and more about the focus on the content. Write good, interesting, compelling content and people will read it. You have to do that when all you have is, in essence, plain text.
That said, I can also get behind this idea:
I wish we could direct more energy into making accessible and delightful editorial experiences that produce modern portable content formats.
The hard truth is this: your Figma docs should be treated like a sketch on the back of a napkin. It should be somewhat accurate but it’s a tool that reflects the front-end, but: it ain’t your design system.
software projects require an enormous amount of human effort. Even relatively simple apps require a group of people to sit in front of a computer for eight hours a day, every day, forever
I’m late to the party on this article, but everything in this piece is [chef’s kiss].
It’s one thing to take a critical eye to things like personas, but it’s another to question the larger structures that facilitate and reinforce their use.
How have our histories and practices affected the way culture is manufactured? What decisions are being made for others without their input, or even awareness of their existence? What power structures inform our notions of frameworks, categorization, and cognition?
Representation is important, but it is also an output of a larger system.
The problem is that a DAO is not an employer or a legally binding contract. The DAO voting to do things has no legal weight. Even if you create a legal corporation to do the bidding of the DAO this doesn't get you out of the problem, because by law the people ultimately in charge must be a named set of human beings. Making it possible for software to employ humans as an independent legal entity is another neat idea but one that definitely does not exist right now.
That’s some pretty wild dystopian stuff if you think about it. Are we trying to make robots our overlords? And for what?
Why does playing a game, or making music, or watching a movie, or sending a message, or any of the millions of other things we spend time doing on the web make more sense or improve if modeled as a currency?
I don’t know. That’s a good question.
that’s how “keeping up” stops being a chore and becomes an interest-driven research activity that feeds your enthusiasm instead of draining it.
Lots of good stuff in here, including how I used to feel when I first started: I tried to read every single article Smashing Magazine put out. And 10 other publications. Realized I couldn’t.
Now I’m better at following blogs I want to, and letting their discourse spur ideas and reflections that I can write on my own. Writing engenders new ideas in my head while solidifying or breaking down whatever my current thinking is. A blog is a great notebook for synthesizing all the research you do as a web worker.
The reason we talk so much about authenticity now is because authenticity is no longer available to us. At best, we simulate authenticity: we imbue our deep fakeness with the qualities that people associate with the authentic. We assemble a self that fits the pattern of authenticity, and the ever-present audience applauds the pattern as “authentic.” The likes roll in, the views accumulate. Our production is validated.
if you’re a writer, even a very talented and hardworking writer, writing must be its own reward, or you’re going to have a rough time.
Special effect advisor Douglas Trumbull speaking about his “practical-first mindset” in filming:
I always try to find an organic — or analog — solution instead of the knee-jerk reaction to use computer graphics. The reason for this is: every time I try this, I get some delightful result that is, in some respects, unexpected. There are magical things that happen in nature — gravity, fluids, lighting — that one couldn’t really design using computer graphics.
I love this idea of going “al naturel” and, through that choice, finding something unexpected—contrast that with a synthetic environment like digital and it’s harder to get spontaneous effects you couldn’t have anticipated. Here’s Trumbull again:
the ‘burbling’ effect [of water washing over hot metal is] a very difficult thing to do with computer graphics because it’s in the realm of fluid dynamics which are very hard to calculate. They’re some of the most challenging elements of computer graphics to execute and you can wait days and days for some frames to render. Whereas, if you’re on a set and you have REAL hot, molten metals and super cold water interacting with this, you’re almost CERTAINLY going to get some surprising visual effect which — on camera — will look really great, particularly if it’s shot at 5000 frames a second.
Fun story about the history of the blinking cursor. I particularly enjoyed this note:
MacDorman tells Inverse. “Much of good HCI [Human computer interaction] design is about the interface letting the user work effectively. It’s not really designed to make the user feel anything, except perhaps in control. Good HCI design lets the user concentrate on the work, not the interface… They are working in the moment without self-consciousness. Their sense of time, place, and self dissolve.”
Good UI design isn’t about making people feel something, it’s about helping them accomplish something (which can, in turn, bring some good feels). Empowerment through UI, not tawdry thrills.
The spread of ever more realistic deep fakes will make it even more likely that people will be taken in by fake news and other lies. The havoc of the last few years is probably just the first act of a long misinformation crisis. Eventually, though, we’ll all begin to take deep fakes for granted. We’ll come to take it as a given that we can’t believe our eyes. At that point, deep fakes will start to have a very different and perhaps even more pernicious effect. They’ll amplify not our gullibility but our skepticism. As we lose trust in the information we receive, we’ll begin, in Giansiracusa’s words, to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence — the world Sontag described — to one where our bias is to take nothing as evidence.
The question is, what happens to “the truth” — the quotation marks seem mandatory now — when all evidence is suspect?
Really, if you’re not following Nicholas Carr’s writing, you should.
When all the evidence presented to our senses is unreal, then strangeness becomes a criterion of truth. The more uncanny the story, the more attractive and convincing it becomes.
If we build our own MP3 music libraries, we're unaffected when a label or artist has a disagreement with one of the streaming services.
Drama around Spotify aside, I’ve been burned a few times by streaming services losing access to (usually obscure) albums I enjoy. In the same spirit as “don’t publish your stuff to Medium, own your content” I’ve been wanting more and more to own my music. Perhaps it’s just because collecting, curating, and owning a music library was such a formative part of my teenage years and into my twenties. Mike’s piece really has me thinking about giving up streaming services and going back to ownership…
[treat] your "to read" pile like a river (a stream that flows past you, and from which you pluck a few choice items, here and there) instead of a bucket (which demands that you empty it). After all, you presumably don't feel overwhelmed by all the unread books in the British Library
Thanks for stopping by and reading this site. If you didn’t, I’d be out of a job around here, and I quite like this job so I owe it all to you.
Chris has such a down-to-earth tone of writing. It’s what I love about his writing (and podcasting).
you can actually stand out from the crowd by simply treating the web platform as what it is: a way to deliver content to people.
The entire thing is worth a read. Also love the quote in there from @TerribleMia
Large companies find HTML & CSS frustrating “at scale” because the web is a fundamentally anti-capitalist mashup art experiment, designed to give consumers all the power.
there is a lot more gamification and “growth hacking” at play than publishing good content and hoping for an audience.
We don’t create content for the web and for longevity. We create content to show ads around it.
There’s plenty of write-ups on GitHub about how to start a new open source project, or how to add tooling, but almost no information or best practices on how to maintain a project over years. I think there’s a big education gap and opportunity here.
This is so true! True of web dev in general too. We’re bombarded with headlines that read “How to setup tool X” but almost zero headlines that read “How to setup, maintain, update, and continually re-evaluate tool X over time”.
You need to get comfortable with the notion that the design system is always eventually wrong. Listen to the needs of the teams you support, help them get to good results faster, and be prepared to be proven wrong.
This is why I find design systems so difficult. They’re always wrong. You’ve never nailed it. I suppose this is akin to all software, so it shouldn’t be surprising. But not being surprising doesn’t mean it’s less difficult to accept. It’s like product work: you’re constantly learning, refining, refactoring, releasing, etc.
in the minds of your customers—and the teams within your organisation—the design system already exists. It’s whatever the product is made up of, regardless of whether there’s a team actually trying to make it more cohesive or consistent. Reflect this existing world, reducing redundancies and simplifying complexity, and you’ll have a design system in no time.
Some good advice in here that resonated with my own experience.
If you really believe that software is subservient to the outcome, you’ll be ready to really find “the right tool for the job” which might not be software at all.
There is no “right” architecture, you’ll never pay down all of your technical debt, you’ll never design the perfect interface, your tests will always be too slow. This isn’t an excuse to never make things better, but instead a way to give you perspective. Worry less about elegance and perfection; instead strive for continuous improvement and creating a livable system that your team enjoys working in and sustainably delivers value.
People talk about innovation a whole lot, but what they are usually looking for is cheap wins and novelty. If you truly innovate, and change the way that people have to do things, expect mostly negative feedback.
I loved this piece.
The belief seems to be that if they just keep testing, they will find the answer, and build the business of their dreams.
Most of them are wrong. Many of their businesses would be better off if they didn’t run any A/B tests at all.
The author ran an A/B test on identical emails and found “statistically significant” differences: a 10% increase in opens! But wait:
to a trained statistician, there is nothing remarkable about these “results.” Given the baseline conversion rate on opens, the sample size simply isn’t large enough to get a reliable result. What’s happening here is just the silly tricks our feeble human minds play on us when we try to measure things.
It’s very possible we are making wrong decisions based on false interpretations of information. Just look at these results from an A/A test:
A 9% increase in opens!
A 20% increase in clicks!
A 51% lower unsubscribe rate!
Finally, an incredible 300% increase in clicks, all by simply doing absolutely nothing!
…to an experienced eye, it’s clear that none of these “tests” have a large enough sample size (when taking into account the baseline conversion rate) to be significant.
The fact is, in so many cases where data is tracked, interpreted, and used to drive decisions, statistics isn’t the core competency of those involved.
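The effect is easy to reproduce. A minimal simulation (all numbers hypothetical: a 20% open rate, 200 recipients per arm) runs repeated A/A tests, where both “variants” are identical, and counts how often a naive comparison still shows a 10%+ relative “lift”:

```python
import random

random.seed(1)
TRUE_RATE = 0.20   # the same underlying open rate for both arms (hypothetical)
N = 200            # recipients per arm (hypothetical)
TRIALS = 1_000

def simulate_aa_test():
    # Both "variants" are identical: draw opens from the same rate.
    a = sum(random.random() < TRUE_RATE for _ in range(N)) / N
    b = sum(random.random() < TRUE_RATE for _ in range(N)) / N
    return a, b

# Count runs where pure noise looks like a 10%+ relative difference.
big_lifts = sum(
    1
    for a, b in (simulate_aa_test() for _ in range(TRIALS))
    if a > 0 and abs(b - a) / a >= 0.10
)

print(f"{big_lifts / TRIALS:.0%} of A/A tests showed a 10%+ relative 'lift'")
```

With samples this small, a double-digit “lift” between two identical emails shows up more often than not, which is exactly the author’s point.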
To run a test that asks an important question, that uses a large enough sample size to come to a reliable conclusion, and that can do so amidst a minefield of different ways to be led astray, takes a lot of resources.
You have to design the test, implement the technology, and come up with the various options. If you’re running a lean organization, there are few cases where this is worth the effort.
Running experiments and creating a vision are two different kinds of tasks. It’s possible you lessen your ability to make intuitive insights when you’re in the statistical weeds. Don’t give up on your vision so easily based on “results”.
Our world needs…vision, and if [we’re] busy second-guessing and testing everything (and often making the incorrect decisions based upon these tests), that’s a sad thing
And the author quotes Eric Ries from The Lean Startup:
Science came to stand for the victory of routine work over creative work, mechanization over humanity, and plans over agility.
Some good stuff in here. First, the “curmudgeony” stuff:
Everybody has small screens, and they all know how to scroll: only make UI widgets ‘sticky’ or ‘fixed’ if you have to. They know where your navigation bar is. You don’t have to push it in their face the whole time.
web dev is a pop culture with no regard for history, dooming each successive generation to repeat the blunders of the old, in a cycle of garbage software, wrapped in ever-escalating useless animations, transitions, and framework rewrites.
Next: naming things is important:
Naming things is fantastic. Everything on the screen should have a name. It’s better for your work. It’s better for accessibility. It’s better for your design. Take a table view and name it ‘Inbox’, ‘Screener’, or ‘Paper Trail’, and they suddenly mean something. What you do with them has changed. A good name transforms design and action.
Last: I liked this gardening metaphor for software development.
The term ‘project’ is a poor metaphor for the horticultural activity that is software development.
Some software is seasonal and has crops, but unless you want your business to end with the first harvest, you need to treat it like a living ecosystem.
Some software components are perennial and evergreen. Others are seasonal and need regular replanting. The project metaphor treats them both the same and increases the risk of code rot.
It me:
Becoming a professional software developer is accumulating a back-catalogue of regrets and mistakes. You learn nothing from success. It is not that you know what good code looks like, but the scars of bad code are fresh in your mind.
This little bit about working with components is great. It’s why we’ve gravitated to the component model on the web: not for the reuse of the components, but for the isolation of them.
Instead of breaking code into parts with common functionality, we break code apart by what it does not share with the rest. We isolate the most frustrating parts to write, maintain, or delete away from each other.
We are not building modules around being able to re-use them, but being able to change them.
And later on the same idea:
It is not so much you are building modules to re-use, but isolating components for change. Handling change is not just developing new features but getting rid of old ones too.
It’s not about writing good software, but writing software that can easily change over time. That is good software. As the author ends:
Good code isn’t about getting it right the first time. Good code is just legacy code that doesn’t get in the way.
This was written in 2012.
we have sacrificed conversation for mere connection.
We’ve become accustomed to a new way of being “alone together.”
Human relationships are rich; they’re messy and demanding. We have learned the habit of cleaning them up with technology. And the move from conversation to connection is part of this. But it’s a process in which we shortchange ourselves. Worse, it seems that over time we stop caring, we forget that there is a difference.
We expect more from technology and less from one another
We think constant connection will make us feel less lonely. The opposite is true. If we are unable to be alone, we are far more likely to be lonely.
This feels relevant to software, not just marketing. We are addicted to (bad) data:
We've got click rates, impressions, conversion rates, open rates, ROAS, pageviews, bounce rates, ROI, CPM, CPC, impression share, average position, sessions, channels, landing pages, KPI after never-ending KPI.
That'd be fine if all this shit meant something and we knew how to interpret it. But it doesn't and we don't.
How did we get here?
I get it. Having tangible data allows us to demonstrate that we're doing our job and we're trying to measure and improve what we're doing...
The numbers are often all we have to prove our case, to get more budget and in extreme cases, to continue to stay employed. We'll remain in this mess until we can separate [the work] from short sighted and poorly informed decision making. Until leaders can lead on the strength of their conviction and experience instead of second guessing themselves and their staff based on the inadequacy of data.
Preach! I feel this so much.
Reminds me of my thread of similar thoughts on Twitter:
If a tree falls in the woods, and its fall was not measured, did it really happen?
If someone visits your website, but you don’t have analytics in place, did they really visit?
If your spouse loves you, but you don’t measure it, is that love real?
Then this comment on the article:
Tech’s contempt for human expertise is bizarre, given that it’s what we do all day.
Never in the history of humankind have so many done so much nothing and tried to make a living at it. And I’m not even talking about Wall Street.
What a great start! An interesting commentary I found linked in a critique of web3.0.
Most of you watching this have real jobs, not make believe “influencer” jobs.
The economy of nothing wouldn't exist if not for hyper consumption…People seem to be desperate to be entertained constantly.
As a UI designer, I found his finger pointing at YouTube’s UI interesting as well.
YouTube is a harsh task master. Sure the analytics page might congratulate you on a successful video. But if you don’t keep it up, it will depress you with comments like [shows screenshot from the YouTube UI]: “Your numbers are down because you didn’t publish enough this week.”
Do you creators take a few days off or, heaven help you, go on vacation? Hell no! You could drop subscribers! Are you nuts? So I would blame YouTube as the reason for YouTubers producing so much “nothing” content. YouTube insists a break is good for you, but their analytics page tells you they are liars.
A very relatable rundown of what it’s like working on a side project:
- I’ll start this fun thing,
- But first I need X.
- Oh, X is a little out of date, I’ll update it quick…
- Oh, X depends on Y which isn’t really needed, I’ll tear that out real quick…
- Oh, Y is…
As is so often the case with CSS, I think new features like [logical properties] are easier to pick up if you’re new to the language. I had to unlearn using floats for layout and instead learn flexbox and grid. Someone learning layout from scratch can go straight to flexbox and grid without having to ditch the cognitive baggage of floats. Similarly, it’s going to take time for me to shed the baggage of directional properties and truly grok logical properties, but someone new to CSS can go straight to logical properties without passing through the directional stage.
I found this a perceptive articulation of a feeling I know I’ve had many times: unlearning to make room for the new is hard. For example, you get really good working with a claw hammer. Then somebody says “hey, we have a sledge hammer now!” All your tips and tricks for doing a sledge hammer’s job with a claw hammer are now obsolete. You’re now on a level playing field with folks who just started and have both the claw and the sledge hammer in their tool belt. Meanwhile, you’re over here trying to learn when to use the new sledge but also when to keep using your claw.
Now just think about how often web technologies change in contrast to hammer technology and you can see how overwhelming that can feel at times.
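For the curious, a rough sketch of the directional-to-logical mapping the quote describes (property pairs as I understand them; class name is made up):

```css
/* In a left-to-right, top-to-bottom writing mode these pairs behave
   identically; the logical versions adapt when the writing mode changes. */
.card {
  margin-left: 1rem;           /* physical: always the left edge */
  margin-inline-start: 1rem;   /* logical: the "start" edge of the inline axis */

  border-top: 1px solid;         /* physical: always the top edge */
  border-block-start: 1px solid; /* logical: the "start" edge of the block axis */
}
```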
by regularly writing and regularly reading what I've written and repeating over and over, I've found that my written skills, spelling and grammar have improved, albeit by rote.
A relatable post on the process of writing and blogging.
Also: I love the term “draft purgatory”.
Don’t dot every I and cross every T, don’t tie up every loose end. Leave some questions unanswered. A piece of art, a movie, a song, a performance, they all tend to be more compelling when they leave you wondering.
Yeah, I know. This documentary is old news. But I finally got around to watching it and it was better than expected.
First, I was introduced to Jaron Lanier and now I’m already reading one of his books:
One of the ways I try to get people to understand just how wrong feeds from places like Facebook are is to think about Wikipedia. When you go to a page, you’re seeing the same thing as other people. So it’s one of the few things online that we at least hold in common.
Now just imagine for a second that Wikipedia said, “We’re gonna give each person a different customized definition, and we’re gonna be paid by people for that.” So, Wikipedia would be spying on you. Wikipedia would calculate, “What’s the thing I can do to get this person to change a little bit on behalf of some commercial interest?” Right? And then it would change the entry.
Can you imagine that? Well, you should be able to, because that’s exactly what’s happening on Facebook. It’s exactly what’s happening in your YouTube feed.
Later, Justin:
And then you look over at the other side [of an argument], and you start to think, “How can those people be so stupid? Look at all of this information that I’m constantly seeing. How are they not seeing that same information?” And the answer is, “They’re not seeing that same information.”
People think the algorithm is trained to give you what you want. It’s not. It’s trained to keep your attention and serve you content—content that is from the highest bidder, content that the highest bidder hopes will modify your behavior towards conformity with what they want to see happen in the world.
As noted in the show, AI doesn’t have to overcome the strength of humans to conquer us. It just has to overcome our weaknesses.
[Google] doesn't have a proxy for truth that’s better than a click. – Cathy O'Neil
Facebook, it’s now widely accepted, has been a calamity for the world. The obvious solution, most people would agree, is to get rid of Facebook. Mark Zuckerberg has a different idea: Get rid of the world.
What a way to start an article.
It’s to turn reality itself into a product. In the metaverse, nothing happens that is not computable.
Always love Nicholas’ insight. He hasn’t been posting much to his blog as of late, but Facebook’s announcement of the metaverse seems to have brought him back into the light of the blogging day. I’m happy for it.
Maciej talks about his youth and how, whenever he did something bad, he was threatened with: “this will end up on your permanent record.” Then he learned there was no such thing as a “permanent record”. But, switching gears to the internet we’re building, he now says:
How depressing to grow up and be part of the generation that implements the god damn permanent record for all of us?
In programmer folklore, we obsess over data loss. We got such PTSD from losing data that now we’re dead set on capturing and retaining everything forever. We haven’t yet arrived at a point where we fear data gathering and retention more than data loss. Rather than having a little foresight, it seems we’re going to capture and retain everything and then learn from experience why that’s a bad idea.
It's almost as if we [we got burned] by the fact that computers only understood binary and couldn’t really understand floating point, that [instead of fixing it] we just decided “we’re going to use integers from now on as a society because the computers don’t let us do otherwise.”
This is funny, but also wise—I think? To be honest, it’s sort of my approach to blogging.
If I try to solve a problem by doing what everyone else is doing and go looking for problems where everyone else is looking, if I want to do something valuable, I'll have to do better than a lot of people, maybe even better than everybody else if the problem is really hard. If the problem is considered trendy, a lot of very smart and hardworking people will be treading the same ground and doing better than that is very difficult. But if I have a dumb thought, one that's too stupid sounding for anyone else to try, I don't necessarily have to be particularly smart or talented or hardworking to come up with valuable solutions. Often, the dumb solution is something any idiot could've come up with and the reason the problem hasn't been solved is because no one was willing to think the dumb thought until an idiot like me looked at the problem.
- No one came back from YouTube feeling fresh and energized.
- No one peeled out motivated and happy after two hours of scrolling through Instagram.
- No one ever got inspired to finish things up after a Netflix Bonanza.
And then this, which is what I’ve tried to voice to people who tell me they loved the book “Atomic Habits” (which I didn’t because it felt like an argument for body/mind hacking, i.e. “you can’t change, you gotta trick yourself into doing good things”):
Yes, your body is constantly playing tricks on you. Yes, it fools you into believing what isn’t the case. It blinds you from seeing what is. And it directs you to get fast rewards.
Your reward system is getting hacked, and you are being made a slave of yourself…Attributing your delays to some unalterable biochemical processes and giving them scientific names will make your delaying seem scientifically inevitable.
Yeah, but really, a science is one perspective on the world. Biology, physics, chemistry…each is just one way to look at and describe reality. Don’t let your inner neuroscientist discourage the unmeasurable, unweighable, uncountable free philosophical self inside you.
You are more than the sum of your cells.
When the app you’re using opens an in-app browser window and A) the app has dark mode turned on, but B) the OS has dark mode turned off, what does the browser show? More specifically, what is the result of (prefers-color-scheme) in that scenario?
I hadn’t thought of user agent preferences cascading like this. Fascinating.
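For reference, the query in question is the one you’d write like this (a sketch; colors are arbitrary). Inside an in-app webview, it matches whatever color-scheme preference the embedding app reports to the browser engine, which, as the linked post explores, may differ from the OS setting:

```css
@media (prefers-color-scheme: dark) {
  body {
    background: #111;
    color: #eee;
  }
}
```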
Our automated lives are founded on the idea of automation freeing us up to do more pleasurable things, but the truth is it usually just frees us from a particular chunk of work to spend more time doing another chunk of work, all in order to keep up with those who are automating their lives. It’s the automation version of keeping up with the Joneses.
Attempts by companies like Google or Freshly to create services that save you time misfire, as millennials see them not as services that will give them more time to relax, but as services that will increase the amount of time they’re available to work.
An interesting visual metaphor:
The escalators I take to work are filled with the same desperate faces and vacant eyes I feel staring through me on the subway, except instead of standing still, they’re bounding up it, subconsciously aware that below their feet is yet another opportunity to optimize on an existing convenience. This, if anything, is a symptom of our current moment: People ignoring the luxury of a moving staircase in favor of whatever sprinting up it can transport them to faster.
This feels especially true:
in the modern era, the best way to spend your time is finding better ways to spend your time.
We are what’s being optimized:
Optimization begets optimization and says we’re its beneficiaries, and in many ways, we are. But given our reliable ignorance of what our lives have conditioned us to do with free time (read: optimize and work harder), we’re better characterized as optimization’s subjects, along for the ride as our pace of life accelerates endlessly.
The ending:
The acceleration of our collective pace of life is not a result of stupidity or irrationality; rather, it is a symptom of what is perfectly predicted by the prisoner’s dilemma at a global scale: Hyper-rational individuals making hyper-rational decisions on how to spend their time by launching into an inescapable arms race of productivity. Burnout is inevitable.
Why relying on “feudal” internet companies (like Lord Google) for aspects of your digital life can be bad:
This is a dilemma of the feudal internet. We go to these protectors because they can offer us more security, but their business model is to make us more vulnerable by getting us to surrender more of the details of our lives to their servers and to put more faith in the algorithms that they train, and then which train us in return to behave in certain ways.
As always, Maciej is funny:
I'm not convinced that a civilization that is struggling to cure male pattern baldness is capable of [solving the problem of death]. If we're going to worry about existential risks, I propose we address the two existential risks that already exist and we know for a fact are real, which are global nuclear war and climate change.
But these real and difficult problems are messy. Tech culture prefers to pick more difficult, abstract problems that haven't been sullied by any sort of contact with reality. They worry about how to give Mars an Earth-like climate rather than how to give Earth an Earth-like climate. They debate how to make a morally-benevolent, God-like artificial intelligence rather than figuring out how to make ethical guardrails around the real artificial intelligence that they're already deploying across the world.
We’ve prioritized violence in games, in part, because it’s easy...a violent game is more socially acceptable than an intimate game. You can sell, market, and stream the violent game. Much tooling exists for creating violence faster. But making compelling intimacy…not so much.
This is a thought provoking article. Why is modern web design, despite the feeling of being overly complicated, the way it is?
Is it because that’s what we’ve optimized for? The brightest minds, the tooling, the conferences, the open source frameworks, the blog posts, it’s all in support of the mainstream—however complex. Trying to do anything outside of the mainstream is hard because you have little to no support. The companies, the publications, the people, many have all optimized their resources, tooling, and attention for the mainstream conventions.
metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.
what’s good for user experience is good for business. But it’s a short step from making that equivalency to flipping the equation: what’s good for the business must, by definition, be good user experience. That’s where things get dicey.
there’s a danger to focusing purely on user experience. That focus can be used as a way of avoiding responsibility for the larger business goals.
I think this is good advice:
A rule of thumb is that the importance of a blog in your feed reader is inversely proportional to their posting cadence. Prioritise the blogs that post only once a month or every couple of weeks over those that post every day or multiple times a day. You don’t want to miss a new Ahmad Shadeed post but there’s no harm in skipping the CSS-Tricks firehose for a few days. Building up a large library of sporadically updated blogs is much more useful and much easier to keep up with than trying to keep up with a handful of aggregation sites every day.
I used to subscribe to many web-related publications, never fully keeping up with their firehose of a publishing schedule. But they did expose me to individual writers who I’ve harbored in my RSS feed for years.
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. – John Gall (Systemantics: How Systems Really Work and How They Fail)
The author’s mathematical notation of how the foundational technologies of the web fit together struck me as interesting:
- URLs + HTTP + HTML = web
- URLs + HTTP + RSS = Podcasts
- URLs + HTTP + JSON = REST APIs
- URLs + P2P + HTML = Web3
An interesting take on the drama around 1Password switching to Electron:
Too often, when a company stumbles, it’s not because it made a fundamentally bad decision. It’s because it made a decision that benefited itself rather than its customers and lacked the perspective to understand that customers don’t applaud when you lower your costs or the quality of your product.
If you try to sell your customers a product designed to make your business more successful without benefiting them, they won’t thank you for it.
Ditto for web stuff I imagine. Don’t hold your breath for folks to applaud your “ground-up” rewrite/refactor from an old framework to a new one that merely mimics existing functionality.
Why, in some cases, I measure for personal projects:
the impetus for my measurements is curiosity. I just want to know the answer to a question; most of the time, I don't write up my results.
Also, this random factoid:
As consumers, we should expect that any review that isn't performed by a trusted, independent agency that purchases its own review copies has been compromised and is not representative of the median consumer experience.
Remember who butters the bread of whatever “review” you read online.
A fun episode.
These were the top-ranked answers to the question: “What’s your favorite HTML element?”
<div>
<marquee>
<button>
<input>
<p>
<script>
If you can believe it, elements such as <a> and <img> were not on the list of top-ranked answers. However, as Dave observed on the podcast, “well [this] is a JS podcast, so checks out”
I do strongly encourage the addition of a new HTML element that represents—and can consequently obviate the use of—the ARIA search landmark role. A search element would provide HTML parity with the ARIA role, and encourage less use of ARIA in favor of native HTML elements.
A great rationale and well-articulated justification for a new <search> element in HTML.
The purpose of ARIA…is to provide parity with HTML semantics. It is meant to be used to fill in the gaps and provide semantic meaning where HTML falls short.
Also, a good reminder about semantics and ARIA:
The first rule of ARIA use in HTML states that you should avoid using ARIA if there is a native HTML element with the semantics and behavior that you require already built in.
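A sketch of the parity being argued for. Today the landmark requires an ARIA role; the proposed element would carry the same semantics natively (the markup here is illustrative, not from the linked post):

```html
<!-- Today: the search landmark needs an explicit ARIA role. -->
<div role="search">
  <form action="/search">
    <input type="search" name="q" aria-label="Search this site">
    <button type="submit">Search</button>
  </form>
</div>

<!-- With the proposed element: same landmark, no ARIA needed. -->
<search>
  <form action="/search">
    <input type="search" name="q" aria-label="Search this site">
    <button type="submit">Search</button>
  </form>
</search>
```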
I’ve had this in my queue to read for a while, but it’s the kind of post you need to read in your browser, not via a feed reader, since the inline examples are so deep and illustrative. Anyway, I finally got around to reading it and it’s excellent.
By using different shadows on [different elements], we create the impression that [one] is closer to us than [another]. Our attention tends to be drawn to the elements closest to us, and so by elevating [some elements over others], we make it more likely that the user focuses on it first.
Here's the first trick for cohesive shadows: every shadow on the page should share the same ratio. This will make it seem like every element is lit from the same very-far-away light source, like the sun.
Also of note: the difference between box-shadow and filter: drop-shadow() is really neat. box-shadow is, well, a shadow in the space of a box. With filter: drop-shadow(), however, your shadow will follow the shape of your image or HTML element! filter is hardware-accelerated, so it’s more performant too!
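A quick side-by-side of the two, as I understand them (values arbitrary; imagine each applied to a transparent PNG):

```css
/* Shadow of the element's rectangular box, transparent pixels and all. */
.boxy {
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.4);
}

/* Shadow that traces the visible (non-transparent) shape of the content. */
.shapely {
  filter: drop-shadow(0 4px 8px rgba(0, 0, 0, 0.4));
}
```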
There’s qualitative research (stories, emotion, and context) and then there’s quantitative research (volume and data). But there’s also evaluative research (testing a hypothesis) and generative research (exploring a problem space before creating a solution). By my count that gives four possible combos: qualitative evaluative research, quantitative evaluative research, qualitative generative research, and quantitative generative research. Phew!
Applicable to design, since—as Maite Otondo says—research uncovers the reality of today so you can design for the future of tomorrow.
This is an article about Python, but it gets there by bashing other scripting languages first. This critique of Bash resonated with me:
We tried Bash, but Bash is a terrible, terrible language. Variables do not work as you expect them to work, there are significant whitespaces in your script, spaces in strings are special, and to top it all there are slight differences in CLI utilities you have to use with Bash.
Then this conclusion:
please don’t use Bash. I know, it’s tempting, it’s just “two lines of code”. It always starts small until one day it grows into an unportable, unsupportable mess. In fact, Bash is so simple and so natural to just start using that I had to make a very strict rule: never use Bash. Just. Don’t.
As someone who has been exposed to Bash, tried to use it, never understood it, then always wondered why it was so prevalent (thinking it must be because it’s easy) this made me feel better.
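The word-splitting complaint is easy to demonstrate in a few lines (filename is hypothetical):

```shell
# An unquoted variable containing a space is split into multiple arguments.
file="my notes.txt"

set -- $file        # unquoted expansion: word-split into two arguments
echo "$#"           # prints 2

set -- "$file"      # quoted expansion: stays one argument
echo "$#"           # prints 1
```

This is exactly the kind of thing that makes `rm $file` quietly do something other than what you meant.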
According to Google’s first UX writer, Sue Factor, one of the main reasons why Google decided to go with sentence case was because it was just easier to explain to designers and engineers. In a product interface, it’s not always clear what’s considered a “title.” Is a tab name a title? How about a settings checkbox? Or a confirmation message?
A great breakdown of title vs. sentence case. I have to admit, after reading it, I’m becoming a believer in sentence-casing all the things. My brain is already thinking about how I’d do this on my blog—say, a regex to find/replace title casing on all post titles going back a decade? Must…resist…urge…
Color sits in a continuum—ok, sure—a spectrum, dictated by scientific fact, registered through personal experience, and ossified with shared cultural framing. That sounds fancy, but in short: “Red” is a vague term that is solid in the middle and hazy at its edges. Fights over redness always happen at the boundary of orange-red and red-orange, because the edges of definition are determined by all the stuff that makes other people fascinating, annoying, and real: their perception, their labels, their culture, their location.
Frank at it again with his words. This time talking about color, but also words:
The tech industry is where words go to die. It’s a tragic bit of irony: tech work is all abstractions, and those abstractions can only be considered and revised through precise language. But we are slobs and so poor at wielding language.
The history of the office workspace:
First, bosses got rid of private offices. That dramatically reduced how many surfaces you had control over: Cubicles give you far less space to arrange things as you’d prefer. Then open-offices came along and made things even more miserable, because they destroyed even the vestigial bits of privacy we had with cubicles — as well as the meagre (but still useful) cubicle wall-space you’d use to organize info. And of course, open offices also meant more noise distractions and more interruptions, which were, as [a researcher] argues, possibly the worst blow of all to our thought: “Perhaps the most important form of control over one’s space is authority over who comes in and out.”
The research around gaining control of your office space and environment is telling:
being able to organize your workspace makes you nearly one-third more productive than when you can’t … as [the author of the experiment] said, “three people working in empowered offices accomplished almost as much as four people in lean offices.”
The important piece that gets forgotten is that Agile was openly, militantly anti-management in the beginning
Boom! A great way to start. Now we’re off:
The project manager’s thinking, as represented by the project plan, constrains the creativity and intelligence of everyone else on the project to that of the plan, rather than engaging everyone’s intelligence to best solve the problems.
And then a summary of where we’ve landed in the history of Agile:
It turns out that prioritizing individuals and interactions is a hard concept to sell. It’s much easier to sell processes and tools.
It turns out that working software is harder to produce than unrealistic plans and reams of documentation.
It turns out that collaborating with customers requires trust and vulnerability, not always present in a business setting.
It turns out that responding to change often gets outweighed by executives wanting to feel in control and still legitimately needing to make long-term plans for their businesses.
Where Agile ended up is the antithesis of its vision. You either die a hero or you live long enough to become the villain.
The irony lies in the attempt to scale a concept anchored in the small scale.
Trying to scale a methodology that focuses on individuals and interactions will inevitably lead to problems – and erode the original value of the methodology.
An interesting talk given in 2013 in which the presenter, Bret Victor, pretends it’s 1973. It shows how things we would not have wanted to happen in computing have happened anyway.
The thesis of the talk, however, revolves around this idea: if you’re constrained by believing you know what you’re doing and you know what programming is, then you’ll be unable to see any adjacent ideas that might actually be better than the ones we have now.
The most dangerous thought you can have as a creative person is to think that you know what you’re doing. Because once you think you know what you’re doing, you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind…
You have to say to yourself, “I don't know what I’m doing.”…Once you truly believe that, then you’re free and you can think anything.
Sadly, from past experience, this mindset of complacency and hoping for the best is the result of natural human mental drift that comes when there are long periods of apparent normalcy. Even if there is a slowly emerging problem, as long as everything looks okay in the day to day, the tendency is to ignore warning signals as minor perturbations. The safety of the system is assumed rather than verified—and consequently managers are led into missing clues, or making careless choices, that lead to disaster. So these recent indications of this mental attitude about the station's attitude are worrisome.
Mental drift resulting from normalcy led to diplomacy efforts trumping engineering concerns, resulting in more bugs and an erosion of safety?
That sounds familiar. Maybe the problems with building software are just human problems.
Most people make this mistake with engineers and developers on Twitter: they assume the number of followers someone has must correlate with how good of an engineer they are, when the only thing a sizeable Twitter following actually shows is how good they are at writing tweets.
On giving up on twitter:
Paradoxically, the less I use Twitter, the better I am at my day job, but also the less likely I am to get approached with opportunities to change my day job. So the thing that makes me a more desirable candidate is the thing that makes me less likely to be a candidate in the first place.
So should you?
if someone new to Engineering asked me how to fast-track their career via job-hopping up the ladder, especially in the world of startups, I would suggest they get to tweeting. I would love to say that the most effective thing you could do is work on your skills, and the community will reward your hard work with new opportunities. But that would be dishonest, as unfortunately, it’s not how the world works.
Great reminder from Frank about “the marrow of life”:
the marrow of life lives beyond novelty in the unexceptional. I say this a lot: “the simple things are worth doing well, because they happen every day.” It is my mantra because I am the king of forgetting it. Any goodness that comes to me during the time of Covid will be by attending to what happens each day. The dishes pile up and the dishes get washed. They pile up and get washed. Isn’t that remarkable? It’s today and then today, then today, and today and today.
An interesting commentary on whether the web is actually a credible alternative to the App Store (as Apple claims). This point about cross-engine compatibility resonates with me:
Compatibility across [browser] engines is key to developer productivity. To the extent that an engine has more than 10% share (or thereabouts), developers tend to view features it lacks as "not ready". It's therefore possible to deny web developers access to features globally by failing to deliver them at the margin.
Entertaining (and nostalgic) essay about life before the internet.
I had no influence and never disrupted anything.
The only content users generated was letters to the editor.
Honestly, a lot of this talk was over my head. But the presenter, Bryan Cantrill, was engaging and funny and I couldn’t stop listening.
Being ahead of your time is not commercially fruitful.
I also learned second system syndrome is a thing.
As a reaction to web dev outcry, Google temporarily halted the breaking of the web. That sounds great but really isn’t. It’s just a clever tactical move.
…Somewhere in late 2022 to early 2023 Google will try again, web developers will be silent, and the modals will be gone.
A lot gets written about DX. It’s likely we’re misunderstanding each other because we don’t agree on the definitions baked into our assertions.
That’s why I like this post by Dave.
What I love the most, though, is his ending:
Written into the ethos of the Web is “Users over Authors over Implementors…” and I believe we must preserve this principle even in our tools. Otherwise we’re building an internet for developers and not an internet for everyone.
Dan talking about why npm audit is broken. I’ve been there: you run a fresh, up-to-date install of create-react-app, and on install npm tells you your app is already vulnerable.
While an interesting read, what I really liked were these two phrases:
The best time to fix it was before rolling it out…The next best time to fix it is now.
I like that. Applies to my designs and my code.
And then this:
in theory there is no difference between theory and practice. But in practice there is.
There are some things that become so ubiquitous and familiar to us – so seemingly obvious – that we forget that they actually had to be invented. Here’s a case in point – the weblog post’s permalink…
[the permalink] added history to weblogs…before you’d link to a site’s front page if you wanted to reference something they were talking about – that link would become worthless within days, but that didn’t matter because your own content was equally disposable. The creation of the permalink built-in memory – links that worked and remained consistent over time, conversations that could be archived and retraced later.
I enjoyed this whole thread, but this particular tweet was razor-sharp:
As a result, we need to view products over a 5+ year lifespan, rather than through the lenses of a release or a series of sprints. Something that’s very difficult to do when we bounce between jobs every 18 months.
Some other pieces of the thread I liked:
[the tension:] Designers generally believe in exploring the problem space, iterating towards a solution and then launching. Founders often believe in launching, seeing how the market reacts and then iterating/pivoting if needs be.
What you build in the early days is almost certainly wrong, and will most likely get thrown out later on…In the early stages of a new venture, neither the company nor the market is really ready for quality yet.
But we do need to understand that for a large part of a product’s life, the process is optimised around speed and efficiency over solution fit. That the most successful designers are essentially pragmatists.
[the designer’s role] isn’t necessarily to come up with the perfect solution right out of the gate, following the structure of the double diamond. But is instead to put something out into the world that’s better than what existed before.
As Obama says: “Well, better is good. Nothing wrong with better.”
the bias towards additive solutions might be further compounded by the fact that subtractive solutions are also less likely to be appreciated. People might expect to receive less credit for subtractive solutions than for additive ones.
Framework fatigue definitely exists. It's also known as innovation in this particular area of software development.
Web development did not change. Web development grew. There are more options now, not different options.
Websites have become business, hence all the features and advancements to support business:
But there's no doubt that it's entirely possible (and likely) that you're working on a project with a complicated pipeline of tech all connected up. Maybe it's some tools to check your code for errors (linting), and some tools to build and transform your code (like JSX to JavaScript, etc), and some aspect of CI (for tests or automated accessibility checks) and then some provisioning and staging environment (Netlify, Google Cloud, etc) and then some end point analytics or smoke tests.
But that's because the businesses online have evolved and grown. In 1997, if your company was exclusively online you were either an innovator or a fool that was going to be quickly parting with their investment. Today, an exclusively online business is completely normal - so it's understandable that the parts that go into supporting that business are larger and more involved.
And then this point:
if you wanted to sell an old monitor on a site like ebay, you're not going to set up a limited business, file for VAT registration, appoint an accountant, get insurance and all the other very involved complicated tasks.
What’s sad is that business web development eventually becomes the norm. Extending Remy’s metaphor, yeah, you wouldn’t set up a business to sell a monitor online. And yet, have you ever felt, come tax time, that you need your own accountant? The tax rules have become so complex, you feel you’re losing out if you don’t have someone in your corner who knows the rules and can dance with them.
Remy’s main point is not lost on me: the web didn’t become more complicated. It grew. The simple stuff is still simple, but now there’s the more complex stuff too. But you’re not beholden to use it.
I use hand sketches as long as I can to communicate concepts to stakeholders and teams.
I believe in showing the work at the fidelity of the thinking.
The higher the fidelity of the image, the higher fidelity the perceived thinking is behind it.
In a few years, all new TVs will have operational cameras. All new TVs will watch the watcher. This will be pitched as an attractive new feature. We’ll be told that, thanks to the embedded cameras and their facial-recognition capabilities, televisions will henceforth be able to tailor content to individual viewers automatically. TVs will know who’s on the couch without having to ask. More than that, televisions will be able to detect medical and criminal events in the home and alert the appropriate authorities. Televisions will begin to save lives, just as watches and phones and doorbells already do. It will feel comforting to know that our TVs are watching over us. What good is a TV that can’t see?
We’ll be the show then. We’ll be the show that watches the show. We’ll be the show that watches the show that watches the show.
Nicholas Carr—always on point.
the operators of the machines that gather our signals. We’re the sites out of which industrial inputs are extracted, little seams in the universal data mine. But unlike mineral deposits, we continuously replenish our supply. The more we’re tapped, the more we produce.
The game continues. My smart TV tells me the precise velocity and trajectory of every pitch [in baseball]. To know is to measure, to measure is to know. As the system incorporates me into its workings, it also seeks to impose on me its point of view. It wants me to see the game — to see the world, to see myself — as a stream of discrete, machine-readable signals.
I sometimes despise that, on the web, we’ve come to accept this premise—to know is to measure, to measure is to know. As if what cannot be measured does not exist. Pic or it didn’t happen. Tree witnessed or it didn’t fall. Feedback or flatline.
I’ve always wondered about for and for-in and forEach and for-of. Through my own trial and error, my perception has become:

- for is old school, what I learned in computer science 101
- forEach is what I’ll use most of the time, except when doing async stuff because await doesn’t work inside it
- for-in I use for async—or wait, should that be for-of? Meh, I’ll just use await Promise.all.
The lazy person in me finally gets to understand all the fors. Now we’ll just see if I can remember it.
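For my own future reference, here’s a little sketch of the differences in plain JavaScript (the array and function names are made up for illustration):

```javascript
const items = ['a', 'b', 'c'];

// Classic for: index-based and verbose, but await works inside it.
for (let i = 0; i < items.length; i++) {
  console.log(items[i]);
}

// forEach: tidy for sync work, but an async callback won't pause the
// loop; forEach fires all callbacks without awaiting them.
items.forEach((item) => console.log(item));

// for-of: iterates values, and await behaves as expected inside it.
for (const item of items) {
  console.log(item);
}

// for-in: iterates keys (object properties, or array indices); usually
// what you want for objects, rarely for arrays.
for (const key in { x: 1, y: 2 }) {
  console.log(key);
}

// Concurrent async work: kick everything off, then await the lot.
async function lengths(words) {
  return Promise.all(words.map(async (w) => w.length));
}
```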
The personal brand, that groan-inducing pillar of digital existence
“Groan-inducing”—just love that description of the personal brand.
Offline we exist by default; online we have to post our way into selfhood. Reality, as Philip K. Dick said, is that which doesn’t go away when you stop believing in it, and while the digital and physical worlds may be converging as a hybridized domain of lived experience and outward perception, our own sustained presence as individuals is the quality that distinguishes the two.
- The Content-Disposition header
- The download attribute
- blob: and data: URLs

A concise summary of how to get the browser to download things. I’ve used the blob: one before. It’s a rather neat trick for downloading files.
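A minimal sketch of the blob: trick (the helper names here are my own, and the click-to-download part assumes a browser environment):

```javascript
// Build an object URL for some in-memory text. Works anywhere Blob
// and URL.createObjectURL exist.
function makeDownloadUrl(text, type = 'text/plain') {
  const blob = new Blob([text], { type });
  return URL.createObjectURL(blob); // returns a blob: URL
}

// In a browser, point a temporary link at the blob: URL and click it.
function downloadText(filename, text) {
  const url = makeDownloadUrl(text);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename; // the download attribute names the saved file
  document.body.appendChild(a);
  a.click();
  a.remove();
  URL.revokeObjectURL(url); // release the object URL when done
}
```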
It’s one thing for a department to over-ride UX concerns, but I bet they’d think twice about jeopardising the site’s ranking with Google.
This resonates in my bones.
Create an extended product roadmap and put those items at least a year off into the future “and as long as they don’t seem relevant, you can just keep pushing them into the future.” Perversely this plan made everyone happy – everyone’s feedback is on the roadmap, and now it’s all just a question of priorities.
Seemingly perverse, yes. Useful? Also yes.
Roadmaps have use beyond 6-week priorities. Sometimes you have to bring people along for the ride, giving acknowledgement to their voice even though you never plan to act on it.
The problem with all the bad advice was that it was unrelated to the problem we were trying to solve…
The solution to every problem can’t be the same
Boom!
The solution to every problem can’t be the same? Try telling that to a framework.
Uncontingent advice is what I think of when I hear the term thought-leader - someone has a single solution that seems to fit every problem. Whatever problem you face, the answer is test-driven development or stream architectures or being-really-truly-agile.
I get frustrated by advice like that but is it wrong? Unit testing, streaming architectures, agile are all good things.
It’s a good point. It’s usually good advice. But it’s not always pertinent advice given constraints like resourcing, time, ambitions, goals, etc.
One way to think about advice is as a prediction. Advocating for TDD can be viewed as a prediction that if you don’t write tests before you write code, your project will be less well-designed and harder to maintain.
Good observation. Advice is prediction. Do x and you’ll get y. It’s risk mitigation.
Software development is full of confident forecasters. We are a pretty new field, and yet everyone seems so sure that they have the best solution to whatever problem is at hand.
The proverbial “silver bullet”.
A great tool is not a universal tool it’s a tool well suited to a specific problem.
So what’s the takeaway?
The more universal a solution someone claims to have to whatever software engineering problem exists, and the more confident they are that it is a fully generalized solution, the more you should question them. The more specific and contingent the advice - the more someone says ‘it depends’… the more likely they are to be leading you in the right direction. At least that’s what I have found.
A fascinating and enlightening dive into how emoji works:
To sum up, these are seven ways emoji can be encoded:
- A single codepoint: 🧛 U+1F9DB
- Single codepoint + variation selector-16: ☹︎ U+2639 + U+FE0F = ☹️
- Skin tone modifier: 🤵 U+1F935 + U+1F3FD = 🤵🏽
- Zero-width joiner sequence: 👨 + ZWJ + 🏭 = 👨🏭
- Flags: 🇦 + 🇱 = 🇦🇱
- Tag sequences: 🏴 + gbsct + U+E007F = 🏴
- Keycap sequences: * + U+FE0F + U+20E3 = *️⃣
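You can poke at these encodings yourself in JavaScript. This is a small sketch; the codepoints helper is my own, and it leans on the fact that the string iterator walks codepoints rather than UTF-16 code units:

```javascript
// Split a string into its Unicode codepoints, formatted like U+1F9DB.
function codepoints(str) {
  return [...str].map(
    (c) => 'U+' + c.codePointAt(0).toString(16).toUpperCase().padStart(4, '0')
  );
}

codepoints('\u{1F9DB}');          // the vampire: a single codepoint
codepoints('\u2639\uFE0F');       // frown + variation selector-16
codepoints('\u{1F1E6}\u{1F1F1}'); // flag of Albania: two regional indicators
```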
Great article by Una introducing container queries. I haven’t played with them yet, but have always wondered where the intersection is between media and container queries.
For example: why, and in what scenarios, would you use @media over @container? From Una:
One of the best features of container queries is the ability to separate micro layouts from macro layouts. You can style individual elements with container queries, creating nuanced micro layouts, and style entire page layouts with media queries, the macro layout. This creates a new level of control that enables even more responsive interfaces.
This idea of micro/macro layouts turned on a light bulb in my head. And her breakdown and visual examples in this post are great.
I’ve seen this intersection of media and container queries being called “the new responsive”.
Once upon a time the web was supposed to be a system for sharing carefully structured information, full of sensible metadata and collaboration. Instead, we turned it into a semi-opaque app delivery model running in a browser sandbox.
And:
we take it for granted that we can read the code we’re running, examine the markup we’re seeing, and review the CSS that styles it. But all these aspects of web development may be nothing more than a brief and transient anomaly in the history of software design.
Take for granted is right.
Lots of frank observations in this article.
First up, Google:
FeedBurner and Google Reader were not victims of Google’s policies. They were the weapons Google used to ensure that the only player extracting value from blogging was Google.
Next, Microsoft:
Microsoft is just carpet-bombing the web developer community with open source software and OSS infrastructure. Typescript, Visual Studio Code, GitHub, npm, and so much more exist primarily because Microsoft executives believe this will lead to more business for Azure and other Microsoft offerings
And now online educators:
web dev education, training, and recruitment exist primarily to extract value from Facebook’s React or Google’s OSS projects. Very few of them invest in figuring out what sort of training will serve their students the best. The easiest thing to sell to both recruiters and students is the big framework on the block, so that’s what they sell and very little else
Then this introspective question near the end:
Chrome and React are strategic levers for Google and Facebook. Electron, GitHub, Visual Studio Code, TypeScript, and npm are all potential strategic levers for Microsoft. V8, npm, React, Visual Studio Code, and Github: they are the foundation of modern web development.
How confident are you that all of these projects will remain strategic for the life of the web? Losing any one of them would knock the entire software economy to the ground. Are we so sure that nothing’s going to change for these companies?
Oh, and this jab at technological diversity on the web:
Diversification will mostly be a question of which flavour of V8 and what flavour of React-like front end framework you’re using.
The conclusion:
If we want software development to last, then we need to work on our attitudes towards open-source and reconsider our reliance on software that, at the moment, happens to be strategically relevant to big tech.
The policies are openly hostile to the workforce. They amount to a declaration that the employees can’t be trusted to do their job without handholding.
It is ironic how Basecamp always preached against micromanaging remote employees who you can’t see and control—as you might in a classic work environment—and to instead simply trust people to be productive and effective.
But mostly I just really loved this line:
We need to become better at distinguishing between those who speak from practice and those who are just performative social media influencers.
A resonating critique of the “flat, geometric, figurative” illustration aesthetic named “Corporate Memphis” that has invaded much of the design world, depicting humans as “nondescript figures” composed of solid colors:
It’s an aesthetic that’s often referred to as ‘Corporate Memphis’, and it’s become the definitive style for big tech and small startups, relentlessly imitated and increasingly parodied. It involves the use of simple, well-bounded scenes of flat cartoon figures in action, often with a slight distortion in proportions (the most common of which being long, bendy arms) to signal that a company is fun and creative. Corporate Memphis is inoffensive and easy to pull off, and while its roots remain in tech marketing and user interface design, the trend has started to consume the visual world at large. It’s also drawing intense criticisms from those within the design world.
Corporate Memphis has been particularly popular in the fintech and property sectors. For small companies looking to stand out, the quirky, digital-first style of Corporate Memphis is an easier solution than stock imagery. But it has contributed to a massive homogenisation and dulling down of the internet’s visual culture. Corporate Memphis “makes big tech companies look friendly, approachable, and concerned with human-level interaction and community – which is largely the opposite of what they really are,” says tech writer Claire L. Evans, who began collecting examples of the style on an are.na image board in 2018.
Since he first recognised the style, Merrill says he’s identified two types of company that use it. Smaller companies engage in “pattern-matching” to look like established tech companies and court investment, says Merrill, while those at IPO-level use it because it’s “lazy and safe.”
We used HBase as our primary data store, because it was designed to scale, just like us. In the future, we would power a loosely federated Amazon, composed of independent online retailers, knit together by our software. If we built it, they would come.
But they didn’t. The only thing that kept us afloat was a bespoke product we had built for eBay, because our CEO knew an executive there. Later, after I left, that same executive moved to Staples and convinced them to acquire the startup outright. Nothing we had built was useful to Staples, it was just evidence of our ability to “innovate”. The resulting “skunk works” team has since been disbanded.
Hey, look at that: business still very much runs on knowing people. Success, or rather the ability to bring in revenue, isn’t always about the ingenuity of the product but rather the personal connections of the owners.
I told myself...our business was data, not ads.
I was wrong, of course. The advertising side of the company was where all the growth was happening, and our product direction was whatever helped them close deals.
And:
Our product and pricing model both required unjustifiable levels of trust from our prospective customers, but none of us saw that as our problem to solve. We were downstream of the business model; our job was simply to wait and prepare for its eventual success.
And one last jab at the technologist mindset:
we’re conflating novelty with technological advancement
First, a good critique:
This is the first kind of novelty-seeking web developer. The type that sees history only as a litany of mistakes and that new things must be good because they are new. Why would anybody make a new thing unless it was an improvement on the status quo? Ergo, it must be an improvement on the status quo.
But then this commentary on the architecture of the web stood out to me:
By default, if you don’t go against the grain of the web, each HTTP endpoint is encapsulated from each other. From the requester’s perspective the logic of each endpoint could be served by an app or script that’s completely isolated from the others. Fetching from that endpoint gets you an HTML file whose state is encapsulated within itself, fetches its visual information from a CSS endpoint and interactivity from a JS resource. Navigating to a new endpoint resets state, styles, and interactivity.
Moreover, all of this can happen really fast if you aren’t going overboard with your CSS and JS.
It reminds me of the moment when I first learned about the details of authentication on the web. I was at a conference and the presenter said, referring to the underlying HTTP protocols, “the web is stateless”. Having just learned react, this was strange to me. But in listening more, I realized how wonderfully powerful this idea of statelessness can be.
The web’s built-in encapsulation would limit it to trivial toy-like projects if we didn’t have a way to build larger interconnected projects. But, instead of the complex shared state that defines most native apps and SPAs, the web represents state as resources that are connected to and transferred via links.
Work, work, work:
Neoliberalism, the prevailing ideology of our times, continues to eat the world. Under neoliberalism, “the market” and an illusory “freedom of choice” are the organizing principles governing human bodies. Employment/employer have seized the scepter that was once held by religion/church. “What do you do?” is the de rigueur ice-breaker question of our times.

Whether we like it or not, the tides of Western culture, at least in the US, have plunged us into a worldview (usually unspoken and unexamined) that makes work the center of one’s life. It’s not a surprise that most workplaces are flowing along with that tide. It is in the nature of tides that few can resist them. At a large enough company, you can practically live your entire life on the company campus: eat, exercise, shower, get child care, sleep, play, relax, do yoga, get medical attention. There was a time when this kind of lifestyle was viewed as dystopic.

It’s a relatively recent invention (the last century or so) that we expect the average person not only to work, but to have a vocation. For most of the previous millennia, it was viewed as a kind of doom or failure to be employed by an employer (serfdom). Attitudes have shifted over the past century, coinciding with the loss of influence from historically powerful religious and secular institutions. That power vacuum was filled by work. Work as the center of one’s life. Work as an identity. Work as the only place that people gather with folks outside their immediate circle of family and friends.
A beautiful reflection on the story behind Apollo 8 and their capturing of the “earthrise” photograph—the first photo of Earth from space.
We didn't have any specific directions about the use of photography. We were all on the earth, so we all knew about the earth. They wanted photographs of something that was unusual [like] close-ups of the far side of the moon. The earth? It was strictly secondary.
More than the documentary details of the earth-rise photo, this is a musing on our existence in the cosmos.
I made the classic mistake. I started out with small updates; type, color, logo. And published those posts after each chunk of work. But then? I made a mess. I made a git branch with the ominous name "rebuild." So, of course, that turned into an amorphous catch-all.
Been there. Done that.
That leads to feelings of lack of accomplishment and a constant feeling of something in the distance that you know you want to get to, but can't will yourself to move towards. I guess? I dunno? Who knows? What was I even talking about? Oh yeah. Web design.
Tyler updated his site and I absolutely love it.
If the data or the back-end requires you to do something, it doesn't mean that's how users should think about a problem. It's a common mistake in UI design...complexity on the back-end doesn't mean you should show that complexity on the front-end....
It's hard doing that though because first you have to understand the back-end. Then you have to unlearn it.
Like Robin, I struggle with this as well. There’s forever a tension between how the system works, how the end user thinks about the task they want to accomplish, and what timeline you have to bridge the gap between the two.
Art-directed blog posts are a trap. For instance, I have a silly little post about hair band albums that I've been nudging for months. I couldn't get it to look how I wanted, and I ended up scrapping the idea. Mind you, the post took about an hour to write. I could have pushed send at that point and gone on to the next post. I asked myself: how many more posts could I have written instead of trying ten different shades of 80s-inspired pink?
I realized that if I cared about publishing in any consistent manner, I'd have to give up the obsessive art-directed posts (which is hard to do when it's a creative outlet). So I decided to keep some flexibility to art direct; I'd just put up some constraints. Dave Rupert has a great (and practical) article about art-directed posts that made a lot of sense to me. Instead of giving myself total freedom, I've limited the number of decisions I can make. I can change the font, and I can change the color scheme.
Like Regan, I feel the tension of art directed posts being an impediment to publishing.
My goal (right now) is to write and publish, so I do it over and over. This helps me become better at writing and by extension thinking.
My goal (right now) is not to get better at making things look awesome. If that were my goal, I’d likely take on art directed posts again.
But there’s always that question: are people coming to your blog to read it or to look at it?
I’ve never found myself with the need to use physical lengths in CSS such as in (inches), cm, or mm.
Nonetheless, I stumbled on this historical background describing why real-world units in CSS has been and will be a failure. I found it intriguing:
Originally, all the “real world” units were meant to be accurate physical measurements. However, in practice most people authored content for 96dpi screens (the de facto standard at the time of early CSS, at least on PCs) which gave a ratio of 1in = 96px, and when browsers changed that ratio because they were displaying on different types of screens, webpages that implicitly assumed the ratio was static had their layouts broken. This led us to fixing a precise 1in:96px ratio in the specs, and the rest of the physical units maintained their correct ratios with inches.
Later, Mozilla attempted to address this again, by adding a separate “mozmm” unit that represented real physical millimeters. This ran into the second problem with real physical units - it relies on the browser accurately knowing the actual size and resolution of your display. Some devices don't give that information; others lie and give info that's only approximately correct. Some displays literally cannot give this sort of information, such as displaying on a projector where the scale depends on the projection distance. Authors also used mozmm for things that didn't actually want or need to be in accurate physical units, so when mozmm and mm diverged, they were sized badly.
The overall conclusion is that trying to present accurate real-world units is a failure; browsers can’t do it reliably, and authors often misuse them anyway, giving users a bad experience.
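That fixed ratio is easy to see in practice. Here’s a small illustrative sketch (the selector name is my own) of how the physical units resolve under the 1in:96px anchor:

```css
/* With the fixed 1in:96px ratio, “physical” units are defined
   relative to the CSS pixel, not to the real world: */
.ruler {
  width: 1in;        /* always exactly 96px */
  height: 1cm;       /* 96px ÷ 2.54 ≈ 37.8px */
  border-width: 1mm; /* ≈ 3.78px */
}
```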
Lots of folks online are linking to this article by Mandy Brown—and for good reason. Here’s what resonated with me:
I am not actually a fan of the “remote” terminology: I prefer to talk of teams as being either co-located or distributed, as those terms describe the team not the individual. After all, no one is remote all by themselves. But if we’re going to be stuck with that term, and it seems like we are, then we have to ask—remote to who? Perhaps you are remote to your colleagues, but you can be deeply embedded in your local community at the same time. Whereas in a co-located environment, you are embedded in your workplace and remote to your neighbors.
And then this:
Because if remote work gives us anything at all, it gives us the chance to root ourselves in a place that isn’t the workplace. It gives us the chance to really live in whatever place we have chosen to live—to live as neighbors and caretakers and organizers, to stop hoarding all of our creative and intellectual capacity for our employers and instead turn some of it towards building real political power in our communities.
And this:
As offices start to reopen there are going to be more and more pieces about the minority of CEOs who buck the trend of hybrid remote work and tell their entire staff to get back to their desks full-time. They will say it’s because collaboration and creativity are better when people are all in the same room, that the companies who continue to pay for expensive offices will end up with a competitive advantage. I promise you those CEOs are the ones looking at the balance sheet and doing a calculation in their head that says that even though remote work might save them millions on real estate, the transfer of power to their employees would be too great to make that a good deal.
If you make websites, a thing you should know is that complete redesigns are oftentimes political, and not stemming from user demand. It’s a move to claim ownership over those who came before you.
From now on, whenever I see a redesign announced to the world, I am going to ask myself: “hmmm...I wonder who is claiming control over what in that business?”
It’s not that “art is important and rare”, and thus valuable, but rather that the artists themselves are important and rare, and impute value on whatever they wish.
I’m late to the Taylor Swift re-recording her own albums news, but found Ben’s take intriguing.
Really enjoyed this quote from Edward Tufte:
In the book The Visual Display of Quantitative Information, data visualization designer Edward Tufte introduced the data-ink ratio concept, which is the proportion of ink devoted to the non-redundant display of data-information.
From this, Tufte derives one of his most famous principles: “Erase non-data-ink, within reason. Ink that fails to depict statistical information does not have much interest to the viewer of a graphic; in fact, sometimes such non-data-ink clutters up the data, as in the case of a thick mesh of grid lines.”
If we do not have the capacity to distinguish what’s true from what’s false, then by definition the marketplace of ideas doesn’t work. And by definition our democracy doesn’t work. We are entering into an epistemological crisis.
This excerpt is what drew me in to this interview. It’s long, but there were a number of insights in it that stuck out to me. I’ve noted some for myself elsewhere, but I wanted to note a few here in a professional vein.
First up, this observation about life being like high school is interesting. It might be maddening at first, but as Obama points out, it should be empowering:
You’re in high school and you see all the cliques and bullying and unfairness and superficiality, and you think, Once I’m grown up I won’t have to deal with that anymore. And then you get to the state legislature and you see all the nonsense and stupidity and pettiness. And then you get to Congress and then you get to the G20, and at each level you have this expectation that things are going to be more refined, more sophisticated, more thoughtful, rigorous, selfless, and it turns out it’s all still like high school. Human dynamics are surprisingly constant. They take different forms. It turns out that the same strengths people have—flaws and foibles that people have—run across cultures and are part of politics. This should be empowering for people.
Lastly, I wanted to note Obama’s own admitted disposition towards optimism, intertwined with this idea of forgoing changing the world and instead iteratively improving it:
I think it is possible to be optimistic as a choice without being naive...[However] being optimistic doesn’t mean that five times a day I don’t say, “We’re doomed.”...
The point I sometimes make [is] “Can we make things better?”
I used to explain to my staff after we had a long policy debate about anything, and we had to make a decision about X or Y, “Well, if we do this I understand we’re not getting everything we’re hoping for, but is this better?” And they say yes, and I say, “Well, better is good. Nothing wrong with better.”
Nothing wrong with better, indeed.
Dax, admitting his disposition to being a control freak, asks Bill how hard it was for him to learn to delegate and if it was one of Bill’s biggest challenges at Microsoft. Bill answers with an interesting and retrospective look at how he had to change his mental model, going from writing code to organizing and orchestrating people (at about ~40:40):
Yeah, scaling [was] a huge challenge. At first I wrote all the code. Then I hired all the people that wrote the code and I looked at the code. Then, eventually, there was code that I didn’t look at and people that I didn’t hire. And the average quality per person [went down], but the ability to have big impact [went up]. And so that idea that a large company is imperfect in many ways [is true] and yet it’s the way to get out to the entire world and bring in all this mix of skills. Most people don’t make that transition and there are times when you go “oh my god, I just want to write the code myself.” The famous thing I used to say is, “I could’ve come in and written that myself over the weekend.” Well, eventually I couldn’t.
when you make it up as you go, you get to do what you think, not what you thought. All plans are rooted in the past — they're never what you think right now, they're what you thought back then. And at best, they're merely guesses about the future. I know a whole lot more about today, today, than I did three months ago. Why not take advantage of that reality?
I really like this—but that might be a biased take because it’s how I live my personal life. No life roadmap. Just some “big picture directional ideas”. To be honest, I’ll probably use Jason’s rationale here for justifying how I live my life.
What he's worrying about is the engineering problem of: how do we build working, reliable systems out of this distributed, computational material. What I’m worrying about...is the humanist problem of: what should we build, why are we building it, what will we do with it, and what is it going to do to us?
It’s so easy to get caught up in the engineering minutiae—How will we do this? Is it even possible? Those humanist questions Bret outlines are great questions to ask yourself before you build anything.
Design and data are not at odds with one another. One helps you understand phenomena and gives you a foundation on which to build your assumptions. The other is the joyful process of creation to solve problems based on those assumptions.
Can’t believe I’m linking to something on LinkedIn, but here we are. Julie’s observations resonate with me. She continues:
If design intuition tells you that some experience is bad (because it's hard to use, it's confusing, etc.), TRUST the intuition...
If design intuition tells you that A works better than B at a large scale, be wary...
data helps you become a better designer. But data by itself does not lead to wonderful things. You still have to design them.
Before you write any code — ask if you could ever possibly want multiple kinds of the thing you are coding...
It is a LOT easier to scale code from a cardinality of 2 to 3 than it is to refactor from a cardinality of 1 to 2...
I’ve seen this so many times, especially in places where I thought there would never be more than one. I like that swyx has a name for it.
Even at my current job, we’re working on a multi-year problem: shifting away from a fundamental assumption of the platform that there’s only ever one of a thing, now that we’re realizing we need multiple to keep pace with the market.
It’s worth noting: you never see the cases where you don’t have to convert to more than one, so you feel the pain when you have to convert but never the joy when you don’t.
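To make swyx’s point concrete, here’s a minimal sketch (the class and field names are my own invention, not from his post) of the difference between designing for a cardinality of 1 versus 2:

```python
# Cardinality 1: the platform "knows" there is exactly one theme.
# Moving off this assumption later means touching every caller.
class SiteV1:
    def __init__(self, theme):
        self.theme = theme

# Cardinality 2+: model a list from the start, even if it usually
# holds a single element. Scaling from 2 themes to 3 is now trivial.
class SiteV2:
    def __init__(self, themes):
        self.themes = list(themes)

    def add_theme(self, theme):
        self.themes.append(theme)

site = SiteV2(["light"])
site.add_theme("dark")  # no refactor needed for the second one
```

The whole point is that `SiteV2` pays a tiny cost up front (a list where a scalar would do) to avoid the expensive 1-to-2 refactor later.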
Josh Fremer, quoted on QuirksBlog, on the question “what’s the difference between accessibility and progressive enhancement?”:
I think of [progressive enhancement and accessibility] as the same process, but with different foci. Accessibility aims to optimize an experience across a spectrum of user capabilities. Progressive enhancement aims to optimize an experience across a spectrum of user agent capabilities...
What is the application of color to a website if not a progressive enhancement targeting user agents that can discern colors? Whether the "agent" in this case is the electronic device in the user's hands or the cells in their eyes is kind of moot. The principles of both PE and accessibility require us to consider the user who is unable to receive color information.
What an interesting idea: whether the user agent is a human being or an electronic device doesn’t matter; it’s all about starting with the most basic functionality and enhancing from there.
That’s an interesting concept to think about, especially in light of Josh’s final point:
a fun little thought experiment is to imagine a sci fi future in which users can plug computers directly into their brains, or swap their personalities into different bodies with different capabilities. This further blurs the line between what we traditionally call a "user agent" and a user's innate disabilities. "This web site is best viewed in a body with 20/20 vision and quick reflexes."
We have this myth that software is zero marginal cost, ignoring the complex human interdependencies that are required to maintain it.
I found this talk incredibly insightful. I need to get her book. She’s put a lot of thinking and research into the aspects of software that people often ignore: namely, every aspect of software that’s not building it. We always talk about creating software but never maintaining it:
“Write code and forget about it” simply isn't a realistic vision of what's required to make and move these systems. Software is brittle, unreliable, subject to breakage at all times, and an endless exercise in failing over and over again.
An interview with the founder of Evernote:
What’s wrong with Silicon Valley
The business model being indirect revenue. It rewards keeping your users in a heightened, emotional state so that they hang around your platform for as many hours as possible, so they can click on ads.
The easiest emotional state to generate algorithmically is tribal outrage. It’s a simple and primal emotion. We, as the tech industry, have built a model that we make money when we piss people off. And everyone’s pissed off now, we’ve made a lot of money and people are like, what went wrong? Well, everything went exactly as planned.
Reductive, perhaps, but it resonates.
I’ve watched hundreds of people interact with forms and seen many of them struggle. But not once was that down to the use of a conventional text field.
It’s almost like we should choose boring technology UI.
Eric Bailey, speaking on some research work he was doing designing a dashboard:
That dashboard would have been a month or so of work for me, but it would have been the participant's everyday experience for the foreseeable future. That's a huge responsibility.
As a designer, this is a good reminder of your impact on humans, regardless of scale.
I’m just really enjoying Jason’s blogging now that he’s got Hey World:
I don't think about competing. Competition is for sports, it's not for business.
HEY is simply an alternative...
And all we have to do is get enough customers to make our business work. That's it. That's how we stay alive. Not by taking marketshare away from anyone, not by siphoning off users, not by spending gobs of cash to convince people to switch. We simply have our own economics to worry about, and if we get that right, we're golden.
When you think of yourself as an alternative, rather than a competitor, you sidestep the grief, the comparison, the need to constantly measure up. Your costs are yours. Your business operates within its own set of requirements. Your reality is yours alone.
I do like this idea of complexity sneaking into our lives and having to actively fight it, almost like it's roaches or something.
Great talk. Funny and candid. A thoughtful rebuke of many commonplace ideas in tech today:
I mentioned perseverance because there's this pernicious idea that comes from startup world that you should "fail quickly". I've always been a proponent of failing really really slowly because if you aren't in it for the money you don't know when you've succeeded or if you might succeed. Success doesn't come labeled in any way to distinguish it from failure—unless you're in it for money in which case it's really easy to count your success.
Later he talks about how Thoreau wasn’t much of a success in his lifetime. Only later were the insights of his writings recognized as ahead of their time. Maciej then weaves that narrative into his own points about perseverance and being open with money when he sarcastically points out the absurdity of financials as a measure for his definition of “success”:
I earned $181,000 from pinboard last year, which is 23,000 times as successful as Henry David Thoreau. Not bad at all.
And then this:
[You should write things down because] experience is hard-won knowledge and you don't want to just let it get away.
And lastly this:
We can't depend on big companies to take a stand for us.
Great discussion herein on modern web tooling and the absolute chasm between trying to run a project with tooling vs. no tooling:
You’re no longer developing a web application. You’re developing a code base for producing that web application.
Email is the internet's oldest self-publishing platform. Billions of emails are "published" every day. Everyone knows how to do it, and everyone already can. The only limitation is that you have to define a private audience with everything you send. You've gotta write an email to: someone.
Email client as publishing platform: audience ranging from one to the entire internet. Fascinating take from Basecamp folks on publishing a blog. I think this will be great for lots of people who aren’t tech savvy and simply want to write stuff and publish it online. Not sure how they’ll handle basic blogging features in the future, like tagging and/or categorizing posts. Nonetheless, exciting to see them enter the blog space in an interesting way.
The solution I've cobbled for us is that Julie (my partner) owns the music account that runs our bedroom and kitchen devices, I own a music account that's used on my desktop computer...my son is using an Alexa that's connected to his music account...and my daughter has a Google nest...signed in to a spare phone that's entirely signed in for her connected to her music account.
This Jenga tower of tech just about works, except my daughter can't control her lights from her device. "Make my room red!" she cheers to much disappointment from Daddy who explains: "I just can't work it out yet".
I am a goddamn hostage to tech.
This is why I have not yet put any smart tech in my house. Perhaps it’s an inevitability as my kids grow, but our music is a Bose CD/radio player, a record player, or an iPhone plugged in to my Bose and controlled manually. Maybe it’s simply because I grew up on them, but I really enjoy the experience of CDs. Plus they are incredibly cheap at thrift stores these days. Every album I ever wanted as a teen is now $1.00 at the thrift store.
I’ve come to accept that if there are bugs on the web or if there’s a massive quality dip on a site you’re visiting…that is a sign the web is working. It means some unqualified person was able to skip past the gatekeepers and put something online. That’s a win for “The web is for everyone.”
Unbelievably great point and I agree wholeheartedly. Also loved this:
I wish we’d see the web more for itself, not defined by its nearest neighbor or navel-gazing over some hypothetical pathway we could have gone down decades ago.
Browsers can’t break the web. They need to support the bleeding edge but also the sins of the past.
Great point by Christian Heilmann. Browsers, of all software, have it tough. Give ’em a break sometimes.
[A/B testing is] seen as a cheap solution to doing hard work. I believe it’s not the panacea that everyone thinks it is.
A great summary of a twitter thread from Jared Spool on A/B testing. Resonates loudly with my experience.
Cutting insight from Eric Bailey:
Blogrolls mostly fell on the wayside as the web matured and industrialized. In an era that is obsessed with conversion funnels, the idea that you’d want to provide a mechanism to voluntarily leave your website seems absurd.
Love this post from Marty Cagan.
I’d like to discuss my single favorite coaching tool for helping product managers become exceptional: the written narrative.
Oh, you mean a spec?
I am not talking about a spec of any sort. A spec is not intended to be a persuasive piece – it’s just a document describing the details of what you want built.
Ah, ok, so not a spec. So what?
I’m talking about a document that describes the vision of what you’re trying to achieve, why this will be valuable for your customers and for your business, and your strategy for achieving this vision. If this narrative is done well, the reader will be both inspired and convinced.
I love the idea of a written narrative—mere prose, thoughtfully written paragraphs of text—for describing vision, value, and strategy. How and why can this be so effective? Because, as Stephenie Landry explains:
[With written narratives] you can’t hide behind complexity, you actually have to work through it.
You think you know something until you have to explain it—not in the way of specifying minute details for people, but in the way of inspiring, persuading, and including people.
Love this point, as well, by Brad Porter:
When I begin to write, I realize that my ‘thoughts’ are usually a jumble of half-baked, incoherent impulses strung together with gaping logical holes between them.
Good reminder from Gruber that email clients are, unfortunately, web browsers without all the protections of an actual browser:
Don’t get me started on how predictable this entire privacy disaster [regarding spy pixels] was, once we lost the war over whether email messages should be plain text only or could contain embedded HTML. Effectively all email clients are web browsers now, yet don’t have any of the privacy protection features actual browsers do.
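For context, the mechanism behind a spy pixel is trivially simple. A hypothetical sketch (the URL and parameter are made up):

```html
<!-- A 1×1 invisible image embedded in an HTML email. Merely opening
     the message fires the request, telling the sender’s server when,
     from what IP, and on what client the email was read. -->
<img src="https://tracker.example.com/open.gif?recipient=abc123"
     width="1" height="1" alt="">
```

A browser can block or proxy requests like this; most email clients just happily fire them.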
I’m doing good work. Or am I? “Good” is whatever wins votes. Am I focusing on the wrong things? Does design even matter? What would other designers think if they saw my work? They’d probably laugh at it. None of this looks like the design industry’s idea of “good” design. Would they even think of this as “design” at all? Mostly I help make decisions about product behavior, but it’s all so invisible. How could anyone evaluate it? How am I supposed to measure my own self-worth with it?
Carolyn’s writing is incredibly refreshing:
But if the work of this year has taught me anything, it’s that getting something, anything out the door in time can make all the difference. Progress over perfection. One foot in front of the other. So here I am, telling an incomplete, imperfect, unsatisfying story, and sharing it with the world before it’s capital-R Ready.
An illuminating look at the security concerns of allowing third-party browsers in iOS. I always thought Apple's rule—“Apps that browse the web must use the appropriate WebKit framework and WebKit JavaScript.”—wasn’t so great. But there is an interesting security angle on this I’d never considered:
If an app could receive device access permissions, and then included its own framework that could execute code from any web site out there, [the requirement for “what’s changed” notes] in the app store review guidelines would become meaningless. Unlike apps, web sites don’t have to describe their features and product changes with every revision.
This becomes an even bigger problem when browsers ship experimental features...which are not yet considered a standard...By allowing apps to ship any web framework, the app store would essentially allow the “app” to run any unaudited code, or change the product completely, circumventing the store’s review process.
...when considering the current state of web standards, and how the dimension of trust and sandboxing around things like Bluetooth and USB is far from being solved, I don’t see how allowing apps to freely execute content from the web would be beneficial for users.
So interesting. There’s more:
Without drawing a line of “what’s a browser”, which is what the Apple app store essentially does, every app could ship its own web engine, lure the user to browse to any website using its in-app browser, and add whatever tracking code it wants...I agree that perhaps the line in the sand of “Only WebKit” is too harsh. What would be an alternative definition of a browser that wouldn’t create a backdoor for tracking user browsing?
The details in this piece helped me better understand the technical merits of Apple’s and Mozilla’s more defensive approach to building web browsers.
Lots in here that resonates with my own feelings on similar topics:
I still remember the debates with colleagues about using babel a few years ago. Within the front end development world, transpiling had just become a thing, so we ended up babelifying our builds to use ES6. Our argument back then was that one day, we would be able to push our application's directory structure on a web server and since all browsers would then support the augmented ES6 features, our app would just work! Without a build process. WOW! That must have been around 2015. When I look at the source code of these old applications now, our technical visions didn't end up becoming reality.
Looking back, I do find it interesting how babel was thought of (at least in some regards) as a polyfill: use it to write the latest and greatest and then, one day, simply remove it. [Narrator voice] but transpiling is a dangerous drug.
I also try to avoid transpiling. It's not because I don't like ESNext features, but more because I want to minimize the risk of getting stuck with the transpiler.
Long before computers were invented, elders have been telling the next generation that they've done everything that there is to be done and that the next generation won't be able to achieve more. Even without knowing any specifics about programming, we can look at how well these kinds of arguments have held up historically and have decent confidence that the elders are not, in fact, correct this time.
...Brooks' 1986 claim that we've basically captured all the productivity gains high-level languages can provide isn't too different from an assembly language programmer saying the same thing in 1955, thinking that assembly is as good as any language can be and that his claims about other categories are similar. The main thing these claims demonstrate are a lack of imagination.
A good reminder, I would venture, that even core web technologies will be pushed in the future. You don’t have to accept everything that comes down the road of innovation, but being able and willing to keep an open imagination is what’s important.
The full cost has to factor in both the price we’re paying for the service, as well as the impact it has on the business.
While specifically talking about A/B testing, this is a good reminder that nothing is free. Saying yes to one thing means saying no to many others. Cost is never solely what the thing you say yes to costs; it also includes the cost of saying no to all the other things.
A neat behind-the-scenes look at how Dave does his own art-directed blog posts. Personally, I think he strikes a great balance between customizability and maintainability, with an elegant yet simple approach to how much he can tune each individual page.
Dave’s piece really makes me want to do art direction. However, I’ve done it in the past and I always felt like the custom styling got in the way of writing and publishing. And right now, I want to write—a lot.
That said, if I ever venture down the path of art direction anew, I might try one-off posts. Like this:
If you want to go all out and create the weirdest page on the Internet, you don’t have to hijack the system. You can eject from your layout entirely by choosing not to specify a layout. That alone gets you most of the way there! A hand-coded page that atrophies at its own pace away from the parent styles is a great way to limit your technical debt.
That speaks to me. Once I publish, I never have to touch it again unless I want to, not because my site changes (which, let’s be honest, it inevitably will, about a million more times).
The insights here resonated with me and the way I viewed my own decision making coming out of college—and I didn’t go to an Ivy League school, but a state college.
For the majority of Ivy Leaguers, the most impressive thing they've accomplished is achieving admission to their university. When you're deemed successful because you went to Harvard rather than celebrated for what got you there in the first place, you learn to game the system and just focus on the credentials the next time around.
And later:
Sometimes, achieving excellence even runs orthogonal to the certainty of prestige. For example, I saw within my own studies that getting an 'A' in a class was very different than actually learning the material. With an intense course load and impending deadlines, many students find it easier to take shortcuts to get the 'A' rather than to really grapple with the material which could take time away from learning how to game the test. The same problem happens within the workforce, except instead of getting an 'A' in a class, it's optimizing to get promoted during your annual review.
And yet later:
But what worries me most about the prestige trap are its effects on an individual level. While recruits may confuse a Stanford CS degree for evidence of world-class programming skills, the candidate won't. We know when we're optimizing for credentials vs. pursuing excellence for its own sake. There is something deeply fulfilling about the latter and rather unsatisfying about the former.
Lots of introspective insights here worth pondering.
The interesting thing about the web is that you never know who you’re building stuff for exactly. Even if you keep statistics. There are so many different users consuming web content. They all have different devices, OSes, screen sizes, default languages, assistive technologies, user preferences… Because of this huge variety, having the structure of web pages (or apps) expressed in a language that is just for structure is essential.
It is truly incredible just how many kinds of users are out there consuming HTML, especially when you start to consider things like search engine spiders and content scrapers, which enable some pretty incredible things (like, for example, democracy). These are all capabilities that, in essence, rely on semantic markup. And it’s not just HTML. It’s CSS too:
responsive design worked, because CSS allowed HTML to be device-agnostic.
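A minimal sketch of that device-agnosticism (the class name and breakpoint here are my own, purely illustrative): the same HTML, restyled per device characteristics rather than per device.

```css
/* Small screens get a single column by default. */
.sidebar {
  float: none;
  width: auto;
}

/* Wider viewports get a two-column layout from the same markup. */
@media (min-width: 40em) {
  .sidebar {
    float: left;
    width: 30%;
  }
}
```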
When I hear MVP, I don’t think Minimum Viable Product. I think Minimum Viable Pie. The food kind.
A slice of pie is all you need to evaluate the whole pie. It’s homogenous. But that’s not how products work. Products are a collection of interwoven parts, one dependent on another, one leading to another, one integrating with another. You can’t take a slice of a product, ask people how they like it, and deduce they’ll like the rest of the product once you’ve completed it. All you learn is that they like or don’t like the slice you gave them.
If you want to see if something works, make it. The whole thing. The simplest version of the whole thing – that’s what version 1.0 is supposed to be. But make that, put it out there, and learn. If you want answers, you have to ask the question, and the question is: Market, what do you think of this completed version 1.0 of our product?
This whole post is so good:
Don’t mistake an impression of a piece of your product as a proxy for the whole truth. When you give someone a slice of something that isn’t homogenous, you’re asking them to guess. You can’t base certainty on that.
That said, there’s one common way to uncertainty: That’s to ask one more person their opinion. It’s easy to think the more opinions you have, the more certain you’ll be, but in practice it’s quite the opposite. If you ever want to be less sure of yourself, less confident in the outcome, just ask someone else what they think. It works every time.
If you’re someone who hasn’t touched a Windows machine in years—like myself—Dave’s commentary on switching back to Mac has some really good experiential perspective, including this insight on how access to the web might be universal, but the tools we use to build for it are not.
While the Web is Universal, the tools are not...Tools failed me this time around and I had to change my life to maintain progress. I know ubiquitous support is hard, but it’s so so so important for the Web that we keep the doors open and meet people where they are, meet them on their devices.
Gruber notes this crucial distinction about the design of Growl’s notification system:
Growl...served the notifyee, not the notifier, and that made all the difference.
I loved Growl. It was the notification polyfill for OSX: designed to become obsolete.
The evaluator, which determines the meaning of expressions in a programming language, is just another program.
I honestly did not understand much of this talk. But what stood out to me the most—and what I wanted to note down—was the insight stated above. Or, to restate it from another point in the talk:
A program can have another program as data.
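A minimal sketch of that idea in Python (the function and the list-based expression format are my own, not from the talk): the “program” is just nested lists, and the evaluator that gives it meaning is itself an ordinary program.

```python
def evaluate(expr):
    """Give meaning to an expression represented as plain data."""
    if isinstance(expr, (int, float)):
        return expr  # numbers evaluate to themselves
    op, *args = expr
    values = [evaluate(a) for a in args]  # recursively evaluate sub-programs
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

# The program (+ 2 (* 3 4)) is just a nested list, i.e., data:
program = ["+", 2, ["*", 3, 4]]
```

Because `program` is ordinary data, another program could just as easily build, inspect, or transform it before it’s ever evaluated.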
I've always loved that moment when someone shows you the thing they built for tracking books they've read or for their jewelry business. Amateur software is magical because you can see the seams and how people wrestled the computer. Like outsider art.
I love making “amateur software” and “outsider art” as described here. The longer I work on the web, the more interested I find myself in my amateur, outsider software than in anything more “professional” that I’ve been employed to do in my career.
Gruber gives a fitting commentary on algorithms:
Blaming [this mess] on “the algorithm” is such ridiculous bullshit. What is an algorithm? It’s a set of rules and heuristics, created and decided by people. Blaming this on “the algorithm” is a shameless attempt to insinuate that they just put everyone into a system and the mean old computer decided to put front-line residents at the end of the list, when in fact, what they mean is, the people at Stanford who created the rules decided to put them at the end of the list. That’s their algorithm.
Simplicity does not precede complexity, but follows it.
Motto for a research laboratory: What we work on today, others will first think of tomorrow.
The proof of a system's value is its existence.
Having that utopian vision of the world is important though. And being optimistic about making enormous change is important, too. But I’m learning that the truly wise folks hold that vision in their minds whilst making tiny incremental progress in that direction every single day.
This reminded me of my experience learning to play the piano. You want to start out playing the incredible pieces written for piano—Beethoven’s “Moonlight Sonata”, Debussy’s “Clair de Lune”, Liszt’s “Hungarian Rhapsody No. 2”—but you quickly realize you can’t. So instead you practice over and over and over. Every day. And every day you practice, you can barely notice any improvement from when you started that day. But as time goes by, you notice drastic improvements week over week, month over month, year over year. Tiny, incremental, accumulative progress towards a goal is a powerful thing.
these days, i'm trying to not define myself by what i make, or what people pay me for.
Lovely new portfolio site.
First of all, what is it?
Ephemeralist [is] a web page that...pulls archives from places like the MoMA and the Smithsonian, and allows you to scroll through history—from books and fossils, to pictures of donkeys from the 1700s.
Paul talks about why he created the site on the Postlight Podcast:
I kinda did it just so when I’m going to bed I would have something to look at that would be distracting...And what’s better than old art and ridiculous ephemera? I like a lot of historical nonsense...
This feels relevant to reinventions that attempt to make the web faster (like AMP) vs. building in a leaner, more purposeful way with the tools we already have which have been optimized for performance gains.
There’s a general observation here: attempts to add performance to a slow system often add complexity, in the form of complex caching, distributed systems, or additional bookkeeping for fine-grained incremental recomputation. These features add complexity and new classes of bugs, and also add overhead and make straight-line performance even worse, further increasing the problem.
When a tool is fast in the first place, these additional layers may be unnecessary to achieve acceptable overall performance, resulting in a system that is in net much simpler for a given level of performance.
That kind of perfectly describes Google’s AMP, does it not? It was an attempt to make the web faster, not by encouraging the proper use of web technologies already available but by reinventing the wheel, even to the point of changing URLs.
This in-depth analysis of loading scripts in the browser is filled with nerdy technical details.
For example, Tim tells this story about a team that was loading a giant bundle of JavaScript using the <script async>
tag. To try and improve performance, they ruthlessly cut down the amount of JavaScript being shipped to the browser. They got the file size way down and...performance got worse! Do you know why? Watch this video to find out—or I’ll just tell you why: even though it was async
, the giant mass of JavaScript was blocking the parser when it arrived, and because it was so much smaller than before, it was arriving earlier and thus blocking the parsing of the HTML document earlier and giving the appearance of slower performance. Hence Tim’s statement at one point in his talk:
Don't take anything as gospel. There will always be tradeoffs.
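Here is a toy model of that counterintuitive result (numbers and names are mine, purely illustrative): an async script halts HTML parsing the moment it arrives, so a smaller bundle that arrives sooner stalls the page with less content on screen.

```javascript
// Toy model: an async script blocks parsing when it arrives and executes.
// The earlier it arrives, the less HTML has been parsed (and rendered).
function htmlParsedBeforeStall(scriptArrivalMs, parseBytesPerMs) {
  return scriptArrivalMs * parseBytesPerMs;
}

const bigBundle   = htmlParsedBeforeStall(500, 50); // arrives late: 25000 bytes shown
const smallBundle = htmlParsedBeforeStall(100, 50); // arrives early: 5000 bytes shown

console.log(smallBundle < bigBundle); // true: the "faster" bundle feels slower
```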
it's easy to write code you can understand now, but hard to write code you'll understand in six months. The best engineers I've worked with aren't the best because they know every API method under the sun, or because they can turn five lines of code into two with a clever reduce call, but because they write code that they (and their colleagues) can understand now and code that can be understood in the future...
How do these engineers get this ability? Experience. They don't foresee problems because they are able to look into a crystal ball...but because they've been there, done that, countless times.
A good reminder that we’re all here to fail. An “experienced” developer, designer, manager, etc., is just someone who has failed a lot—and learned from it.
Gruber’s review of the M1 MacBook Air has this nugget which feels so relevant to product and software:
What you need to understand is that the best aspects of these Macs aren’t benchmark-able. It’s about how nice they are. The cooling system never making any noise doesn’t show up in a benchmark. I suppose you could assign it a decibel value in an anechoic chamber, but silent operation, and a palm rest that remains cool to the touch even under heavy load, aren’t quantities. They’re qualities. They’re just nice.
We’re always trying to quantify things that we can measure in order to show, with objective data, that they improved. But insanely great human-to-computer interaction isn’t solely a science. It’s also an art, which means the qualities you can’t or don’t measure have a huge impact.
This line resonates:
design applications have made it much easier for designers to work together; development applications have made it easier for developers to work together.
But...the gap between each discipline’s workspace hasn’t changed significantly.
I loved this little thought on the power of an <a>
link. People are literally employed to write emails asking domain owners for an <a>
link to their site in exchange for $$$.
A link, on the open internet, is a vote. It’s your way of saying, “this is great, and more people should know about it.” We talk about how much power the search engines have, but if you think about it, the search engines listen to us. They see what we link to, what we click, and how long we stay. At the end of the day, we are the curators of what gets surfaced on the internet.
This is, at least unconsciously, part of the reason why I indexed my blog’s links: it’s a reminder of the votes I’ve cast on the internet.
I continue to occasionally get emails from marketers asking me to link to their stuff. And while it’s annoying to receive spam, it’s also a reminder of the power I wield, just by having an independent website where I can link to whatever I want.
The power of links! Independent websites: seize the means of search rankings!
When someone says “hey, this design doesn’t make sense” it’s so very difficult for that not to spiral into “wow, I’m a terrible person huh!”
I feel this. Less so now than when I was younger, but still. And not even just in design critique, just life critique. But even knowing that it’s a constructive critique doesn’t always help with processing it. As Robin says, “I know design critiques aren’t about me, so why do they still hurt?”
First, there’s this note on Conway’s Law (“You ship your org chart”):
The original wording from Melvin Conway goes: “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”...
Think about any complex product you like – it could be your phone, your car, a public transit system; whatever. That product is composed of many different parts, and sub parts, all the way down to tiny little atomic units that feel like indivisible “chunks” of product. Conway’s law is an observation about the contours of those chunks of product....boundaries between chunks of product mirror communication boundaries inside the org.
Then there’s this point about software saturation:
there is so much software...I forget who said this – someone smart on Twitter – but your mental idea of the software business changes when you realize that the primary customer of software is becoming other software.
This stuck out to me because just a couple days before I had seen GitLab’s “Tech Stack Details” where they openly enumerate (all?) of the software they use. The list is huge, over 160 pieces of software.
An interesting observation on how our digitally-saturated lives continue to favor and connect to physical world representations. Of dust we truly art I suppose:
It’s so interesting that we designers are all using these mockup templates to sell work through. Big agencies do it. Individual practices (like me) use the same files. Yet nothing ever gets physically produced. The work stays digital, but we need the mystique of physical production to get the kind of alignment necessary for clients to say yes. Nobody ever fell in love with a logo by seeing it mocked up in an email signature. We still emotionally favor the material world even if our branding strategies and marketing budgets shit-canned it ages ago.
Data can blind you:
As Andy Davies puts it, “Site analytics only show the behaviour of those visitors who’ll tolerate the experience being delivered.” Sessions like this are extremely unlikely to ever show up in your data. Far more often than not, folks are going to jump away before analytics ever has a chance to note that they existed.
And later:
We build an experience that is completely unusable for them, and is completely invisible to our data...we look at the data and think, “Well, we don’t get any of those low-end Android devices so I guess we don’t have to worry about that.” A self-fulfilling prophecy.
Observations from someone who has been building for the web for 20 years.
We keep engineering more complexity and then using complex solutions to abstract it away for a while until we build new complexity on top of the new abstractions. If we zoom out enough, the overall picture looks less like simplification.
It’s interesting to view “progress” through this lens (screenshot from talk): a pile of abstractions, each hiding the previous layer’s complexity. Either way, Kyle’s sentiment resonates with me quite often:
I'm not tired of building products for the web. I'm tired of being a modern JavaScript developer...Most days, I don't seem to be very good at my job.
This piece of writing was enough to interest me in buying the book. It sounded great, even though I’ve never heard of Bob Moesta. These kinds of insight cut through so much of the cruft of making software:
Everyone’s struggling with something, and that’s where the opportunity lies to help people make progress. Sure, people have projects, and software can help people manage those projects, but people don’t have a “project management problem.”...Project management is a label, it’s not a struggle.
What kind of struggles do people have?
People struggle to know where a project stands. People struggle to maintain accountability across teams. People struggle to know who’s working on what, and when those things will be done. People struggle with presenting a professional appearance with clients. People struggle to keep everything organized in one place so people know where things are. People struggle to communicate clearly so they don’t have to repeat themselves. People struggle to cover their ass and document decisions, so they aren’t held liable if a client says something wasn’t delivered as promised. That’s the deep down stuff, the real struggles.
Prototype, demo, repeat. Yes, yes, yes!!
We like to think about this process as the game discovering itself over time. Because as iterators, rather than designers, it’s our job to simply play the game, listen to it, feel it, and kind of feel out what it seems to want to become - and just follow the trails of what’s fun. — Seth Coster, “Crashlands: Design by Chaos” (GDC 2018)
Then:
Interestingly the designer’s role shifts a bit from creative overlord to active listener. They must be attentive to what the game (via play testers) is “saying”. They must be willing to explore those more interesting aspects, abandoning bad ideas and letting go of their initial ideas along the way. I love this methodology and it’s not dissimilar to how we build websites at Paravel, iterating and oversharing our works-in-progress in Slack.
And last this bit of wisdom was maybe my favorite, though perhaps not directly related to the subject at hand:
Don’t throw your pearls to swine by tweeting things to randos on Twitter.
I’ve been feeling the need for these kinds of words lately:
Raise the speed
Raise the quality
Narrow the focus
“Everybody wants results, but not everybody wants to do what that takes.”
Which links to this article (which, to be honest, I only skimmed because it’s quite long and full of CEO-speak):
As a leader, your opportunity is to reset in each of these dimensions. You do it in every single conversation, meeting, and encounter. You look for and exploit every single opportunity to step up the pace, expect a higher quality outcome, and narrow the plane of attack. Then, you relentlessly follow up...
An interesting observation from James which pits the idea of a “design system” as a completed, packaged object, against the idea of “systematic design” which is more of a mindset that transcends individual objects.
I prefer to talk about systematic design.
So what I mean by systematic design is designing only the things you need, but in a systematic way so that anything you need in future can build on the system you are building. So it's not a finished thing. I think a design system to me sounds like a product which is finished, and that you hand over to somebody for them to kind of take on.
I think a design system sounds like quite an intimidating product, whereas systematic design is something that anybody can get involved with at any point.
For another level of ironic, consider that while I think of a 50kB table as bloat, this page is 12kB when gzipped, even with all of the bloat. Google's AMP currently has > 100kB of blocking javascript that has to load before the page loads! There's no reason for me to use AMP pages because AMP is slower than my current setup of pure HTML with a few lines of embedded CSS and the occasional image, but, as a result, I'm penalized by Google (relative to AMP pages) for not "accelerating" (decelerating) my page with AMP.
I actually really enjoy the “design” (how it looks + works) of danluu.com. Also enjoyed this fact check:
The flaw in the “page weight doesn’t matter because average speed is fast” is that if you average the connection of someone in my apartment building (which is wired for 1Gbps internet) and someone on 56k dialup, you get an average speed of 500 Mbps.
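The arithmetic is easy to replay (approximate speeds; 56k dial-up is 0.056 Mbps):

```javascript
// Averaging a 1 Gbps connection with 56k dial-up:
const speedsMbps = [1000, 0.056];
const mean = speedsMbps.reduce((a, b) => a + b, 0) / speedsMbps.length;

console.log(mean.toFixed(3)); // "500.028", a number that describes neither user
```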
While I didn’t agree with necessarily everything in this piece, I really loved the way it started:
The web’s evolution over the last decade has mirrored the American economy. All of the essential indicators are going “up and to the right,” a steady stream of fundamental advances reassure us that there “is progress,” but the actual experience and effects for individuals stagnates or regresses.
Nothing helps you think you’re on the right path like seeing a graph that goes up and to the right representing any aspect of the thing you’re doing.
Software feels more like assembly than craft...I’ll do glue work when it creates many times the value than the same time spent on craft. It’s fine, I can craft on the weekends, for play.
Side note: Jess’ hero image for this post made me read the title as “Back when software was a cat”.
There’s nothing new under the sun. Ideas abound, but execution is everything.
So that’s why I won’t sign your NDA. It’s not because I don’t like you, it’s not because I want to steal your ideas, it’s not because what you’re up to isn’t important.
It’s because the ideas you are likely to share with me over coffee or in a phone conversation are otherwise plentiful, worthless in isolation, and, to some degree, completely unoriginal and already known to the world.
A great explanation of the metaphor behind “tech debt”.
I coined the debt metaphor to explain the refactoring that we were doing...It was important to me that we accumulate the learnings we did about the application over time by modifying the program to look as if we had known what we were doing all along...The explanation I gave to my boss, and this was financial software, was a financial analogy I called “the debt metaphor”.
With borrowed money you can do something sooner than you might otherwise, but then until you pay back that money you’ll be paying interest. I thought borrowing money was a good idea, I thought that rushing software out the door to get some experience with it was a good idea, but that of course, you would eventually go back and as you learned things about that software you would repay that loan by refactoring the program to reflect your experience as you acquired it.
A lot of bloggers at least have explained the debt metaphor and confused it, I think, with the idea that you could write code poorly with the intention of doing a good job later and thinking that that was the primary source of debt. I'm never in favor of writing code poorly, but I am in favor of writing code to reflect your current understanding of a problem even if that understanding is partial.
The cynics will say that the pendulum will swing back to flat in a few years and that it’s all part of the cyclical nature of design trends. I don’t think the pendulum swings all the way back. Sure, there’s an aspect of fashion to design—what’s in vogue today won’t be tomorrow—but I think this is more about the scale balancing out than it is the pendulum swinging back. Fun and “judicious expressiveness” is back to stay—not because fashions have changed but because it has value. It took us losing it to realise that.
Shameless plug: I helped Michael write those particular lines, so is it cheating to cite this here on my blog? Either way, I agree with Michael and am excited about the reintroduction of fun into visual design.
An interesting perspective from a technical director at the W3C on how and why CSS came to be what it is today. What struck me were the similarities between building platform features for the web and building just about any other piece of software. Read these excerpts and try to tell me they don’t sound exactly like any other piece of software you’ve ever worked on:
Once a feature is in place, it’s easier to slightly improve it than to add a new, better, but completely different feature that does the same thing.
This also explains why the first two improvements to specifying color in CSS—a named-color system and a hue-wheel, polar notation—were adopted over much better, but more complicated, systems proposed at the same time. They were slight improvements, seen as easy to implement.
The idea lost momentum, and we chose the path of least resistance instead.
After a week of mailing list discussion and no suggestions for improvement, the consensus was that my syntax was good enough for now.
A nice articulate piece on craft and culture.
Linux proved that there is no upper limit to how much value you could extract out of a message board or email list, if you got the social dynamics right. The internet made it easy for craft practitioners to find one another, fraternize and argue over methods and best practices, almost like artists. The fact that none of these people had ever met in person, or had any shared culture or life experience, made zero difference. Their craft was their shared culture.
I suspect that within a few years, we (and others) will go through a complete rethink of how hiring works, that’s re-oriented around craft: how do we celebrate it, how do we communicate the ways that we celebrate it, how do we find people who crave celebration of that very specific thing, and then how do we hire them, wherever they are?
Craft is culture. If you care about craft, you’ve done the hard part.
Honestly, I just really liked this line because it felt very relevant.
from an outsider’s perspective the enterprise/startup web developer job looks...mostly dedicated to re-building things the browser already does, just with a ‘this is our brand’ corporate spin.
So what happens is that developers learn best practices from the previous generation and they try to follow them. Because there were concrete problems and concrete solutions that were born out of experience. And so the next generation tries to pass them on. But it's hard to explain all this context and all this trade off, so they just get flattened into these ideas of best practices and anti-patterns.
And so they get taught to the new generation. But if the new generation doesn't understand the trade offs and the reasons they came to these conclusions, they don't have the context to decide when it's actually a bad idea and how far can you stretch this. So they run into their own problems from trying to take these best practices and anti-patterns to the extreme. And so they teach the next generation.
So what to do about it?
I think one way to try to break this loop is just when we teach something to the next generation, we shouldn't just be two-dimensional and say here's best practices and anti-patterns. But we should try to explain what is it that you're actually trading away. What are the benefits and what are the costs of this idea?
Jeremy’s comments on Open Prioritization, which is an experiment in crowd-funding prioritization of new features in browsers.
when numbers are used as the justification, you’re playing the numbers game from then on.
He is speaking about monetary justification in arguments, but I saw a corollary in data-driven decisions. Once you make a product decision based purely on data, it becomes hard to ever deviate from or change that decision. “But the data said we should...” is the argument. Or “what does the data say?” becomes the leading question on decision making. Data is a cruel master.
He continues:
You’ll probably have to field questions like “Well, how many screen reader users are visiting our site anyway?” (To which the correct answer is “I don’t know and I don’t care”.)
Sometimes I wish more product decisions were made on principles and values like this more than the crutch of data.
If you tie the justification ... to data, then what happens should the data change? ... If your justification isn’t tied to numbers, then it hardly matters what the numbers say (though it does admittedly feel good to have your stance backed up)...The fundamental purpose of [your product] needs to be shared, not swapped out based on who’s doing the talking.
I haven’t really played with Deno yet, but conceptually I love a number of its founding premises:
In order to publish a website, we don’t login to a central Google server, and upload our website to the registry. Then if someone wants to view our website, they use a command line tool, which adds an entry to our browser.json file on our local machine and goes and fetches the whole website, plus any other websites that the one website links to to our local websites directory before we then fire up our browser to actually look at the website. That would be insane, right? So why accept that model for running code?
The Deno CLI works like a browser, but for code. You import a URL in the code and Deno will go and fetch that code and cache it locally, just like a browser. Also, like a browser, your code runs in a sandbox, which has zero trust of the code you are running, irrespective of the source. You, the person invoking the code, get to tell that code what it can and can’t do, externally. Also, like a browser, code can ask you permission to do things, which you can choose to grant or deny.
With Deno, there is no package manager. You only need HTTP.
The HTTP protocol provides everything that is needed to provide information about the code, and Deno tries to fully leverage that protocol, without having to create a new protocol.
Also this bit:
This leads us to the Deno model, which I like to call Deps-in-JS, since all the cool kids are doing *-in-JS things.
This is a really interesting conceptual look at what Deno is doing and how it’s different. I like it. It feels very “webby”.
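A rough sketch of that "browser for code" idea: fetch a module by URL once, then answer from a local cache afterwards (everything here is illustrative; this is not Deno's actual implementation):

```javascript
// Illustrative sketch of Deno's module-loading model: like a browser,
// fetch code by URL the first time, then serve it from a local cache.
const moduleCache = new Map();

async function importLikeABrowser(url, fetchFn) {
  if (!moduleCache.has(url)) {
    moduleCache.set(url, await fetchFn(url)); // first request goes to the network
  }
  return moduleCache.get(url);                // later requests are served locally
}

// A fake network keeps the sketch self-contained:
let networkHits = 0;
const fakeFetch = async (url) => { networkHits += 1; return `source of ${url}`; };

(async () => {
  await importLikeABrowser("https://deno.land/std/http/server.ts", fakeFetch);
  await importLikeABrowser("https://deno.land/std/http/server.ts", fakeFetch);
  console.log(networkHits); // 1: the second import never touched the network
})();
```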
Found this via Dave’s blog and the source article is a wonderful read that starts like this:
This memo documents the fundamental truths of networking for the Internet community. This memo does not specify a standard, except in the sense that all standards must implicitly follow the fundamental truths.
What follows is a number of half truths, half jokes, informed by years of experience. Like this one:
(3) With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea.
A number of these truths, rules, whatever they are, I encountered just this week. In fact, I see many of them every week:
(5) It is always possible to agglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.
And
(8) It is more complicated than you think.
A great read.
Oh hey, another article from Dave. But it just resonated with me so much!
We don’t really make software architecture decisions based on some rigorous cost/benefit analysis. Decisions are often more informed on existing biases, past experiences, and the tradeoffs people find most comfortable. Decisions also get slipped in under the cover of self-imposed sprint deadlines...sometimes, it seems, the act of making a decision or the need to “unblock” something gets elevated over the impact of the decision.
I think this is where the second implication of Tesler’s Law comes into play: “Who will inherit the complexity?” Is it a value or a cost that gets passed on to the user? It’s a simple question, but the answer dictates so much.
This really resonates—so much decision making gets made around how the teams building the software organize themselves, communicate, and basically work. And the outworking of those environments, those processes, is what often frames our decision making.
If the browser notices the server supports HTTP/2 it sets up a HTTP/2 session (with a handshake that involves a few roundtrips like what I described for DNS) and creates a new stream for this request. The browser then formats the request as HTTP/2 wire format bytes (binary format) and writes it to the HTTP/2 stream, which writes it to the HTTP/2 framing layer, which writes it to the encryption layer, which writes it to the network socket and sends it over the internet.
Fascinating look at all that’s happening to merely visit something like website.com
. To be quite honest, it’s incredible any of it even works at all.
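The layering in that quote reads like function composition: each layer wraps the output of the one above. A toy sketch (the labels stand in for real wire formats):

```javascript
// Each layer wraps the bytes produced by the layer above it.
const toHttp2Frames = (request) => `FRAME[${request}]`;  // HTTP/2 framing layer
const encrypt       = (frames)  => `TLS[${frames}]`;     // encryption layer
const writeToSocket = (bytes)   => bytes;                // pretend network socket

const send = (request) => writeToSocket(encrypt(toHttp2Frames(request)));

console.log(send("GET /")); // "TLS[FRAME[GET /]]"
```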
Tyler reflecting on the rationale and process in his color choices. In a world where we’re constantly creating static digital mockups as the artifacts for gathering consensus to build something, this is a good reminder to constantly question: is this design meant to be looked at, or actually used?
Letting contrast ratios influence aesthetic decisions can be a little uncomfortable. As an experienced designer, I have a trained eye that I trust to choose colors that work well and look good. But, that’s not the whole story. My instincts towards subtlety often lead to colors that look fantastic, but are low in contrast. Low contrast text can be difficult for people to see. Color needs more than my instincts alone. So I let go of a bit of control.
Letting go can produce great results. Results that make a design accessible and enjoyable to more people.
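For reference, the contrast ratio Tyler is deferring to is a defined formula (WCAG 2.x relative luminance); here's a sketch of the math for sRGB colors:

```javascript
// WCAG relative luminance for an sRGB color given as [r, g, b] in 0..255.
function luminance([r, g, b]) {
  const linearize = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging from 1 to 21.
function contrastRatio(colorA, colorB) {
  const [lighter, darker] = [luminance(colorA), luminance(colorB)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // ≈ 21, the maximum
```

WCAG AA asks for at least 4.5:1 for normal body text, which is exactly the kind of constraint that pushes back on subtle, low-contrast palettes.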
Evergreen excerpt to reference:
Before I stopped using twitter, and since starting it up again, I decided always to be positive while using it. Or rather, not to be negative. It was very easy to, for example, watch a slightly substandard film, pop online and go “Oh lol, just watched a terrible film, something, something.” While forgetting that people worked hard on it, and everything that goes along with how much effort goes into producing anything from start to end.
Based on some recommendations on Twitter, I started watching some Strangeloop talks. To be honest, I don’t fully grasp a lot of what’s being discussed in these talks, but there are little bits and pieces I find and note down.
In this particular talk, Jamie points out an interesting historical fact around regexes. My familiarity with regexes is: those big cryptic pattern matching strings I hopefully never have to deal with much. I found it interesting that what makes them scary (a regex that’s more than like seven characters long) is basically due to the over-extension of a constraint that shaped their original design.
This is an interesting point of origin: [regex's] usefulness on the command line and while editing led directly to how concise regular expressions are—which is great if you need to be concise. If you don't need to be concise it means that the syntax is rather cryptic.
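A small illustration of that tradeoff (example mine): the terse form is great at a shell prompt, while the same pattern built from named pieces is longer but self-describing.

```javascript
// The classic terse form of a date pattern:
const terse = /^\d{4}-\d{2}-\d{2}$/;

// The same pattern assembled from named parts:
const year  = String.raw`\d{4}`;
const month = String.raw`\d{2}`;
const day   = String.raw`\d{2}`;
const readable = new RegExp(`^${year}-${month}-${day}$`);

console.log(terse.test("2021-01-05"));    // true
console.log(readable.test("2021-01-05")); // true
console.log(readable.test("not a date")); // false
```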
Things we make will usually not be perfect. Absolute perfection is always elusive. However what makes us manifest palpable “perfection” is care. You can sense care, just as you can sense carelessness.
A cool story.
So I pulled up my website and asked if any of them knew what the browser developer tools were and no one did. So I explained how on any website they visit, they could right click, find "Inspect" and click on that and view the code for a website.
I opened the DevTools on my site and there was an audible gasp from the class and excited murmuring.
"That's your code?" A student asked.
"Yes, that's all my code!"
"You wrote all of that?!"
"Yes, it's my website."
And the class kind of exploded and starting talking amongst themselves.
Jeremy writing on how so many of our problems are problems of human communication and understanding, not technical problems.
We like to talk about how hard and complex our technical work is, but frankly, it’s a lot easier to get a computer to do what you want than to convince a human.
He continues:
let’s say it is someone in the marketing department who is pushing to have an obtrusive newsletter sign-up form get shoved in the user’s face. Talk to them. Figure out what their goals are—what outcome are they hoping to get to. If they don’t seem to understand the user-experience implications, talk to them about that. But it needs to be a two-way conversation. You need to understand what they need before you start telling them what you want.
I realise that makes it sound patronisingly simple, and I know that in actuality it’s a Sisyphean task. It may be that genuine understanding between people is the wickedest of design problems. But even if this problem seems insurmountable, at least you’d be tackling the right problem.
Sage advice.
Caroline is asked: “What’s the most consistent usability issue you see being repeated in forms?” Her answer (emphasis mine):
Without a doubt, it’s people thinking that they can solve the problems in forms by addressing the technology and interaction design issues. Yes, the technology must work and the interaction design has to be easy, but what it comes down to is why you are asking the questions. I constantly hear people saying that if they use this new technology, they’ll get better forms. But you won’t, not until you’ve worked out good questions, why you’re asking those questions and what you’re going to do with the answers. Changing technology will never solve the problem of asking a bad question.
It’s so easy to jump in and “fix” a form by changing its visual design or layout. But form follows content, and asking the right questions is the foundation of building great forms. But those questions have to be based on trust, which stem from the business not the design team (emphasis mine):
It’s all about value. I’m going to share a shocking secret with you: some people don’t always answer personal details truthfully on the internet. Forcing people into a situation where they’re already untruthful to your organisation because you didn’t respect their need for privacy, that starts them on the wrong foot. You’ve already set out on a damaged relationship with that customer. They didn’t trust you with their personal details, why will they trust you in the future? Respect [your users] and their privacy and their needs to reveal information when it seems relevant to what they’re doing – not before. I’ve never seen people reluctantly put in their true address when they got to the point of buying something and it asked for a shipping address.
First up, the difficulty of CSS reminds me of the difficulty of book design... When designing a book we have to treat the InDesign file as a sort of best guess, it’s not until we print the dang thing that we begin to see all the problems...
So the best thing to remember when designing a book is the same as when designing a website: the screen is a lie.
What we’re seeing on screen and what the final product will become are two very different things. We need to constantly remind ourselves that there are invisible edge cases, problems that in this context, on this screen, are made utterly invisible to us.
“the screen is a lie”—I loved this phrase when I read it. Perhaps everything in this article will be apparent to anyone who’s been working on the web for years, but it’s definitely not apparent for people who haven’t (i.e. “stakeholders”). I’ve already found myself trying to distill the essence of this article into my introductory statements to stakeholders before presenting design mocks, like “ok, remember everyone, that what you’re going to see is actually a lie. It does not represent the final product, nor does it represent the finality of the product. There are problems and edge cases that are entirely invisible to us in these mocks. This is just a glimpse of a very particular, targeted solution.”
Oh, and I liked this snippet about the web being messy. Once you embrace the messiness, it’s no longer a pain point. In fact, it’s actually a strength you enjoy capitalizing on.
I think everyone hates CSS for forcing them to be empathetic but also because the web is so messy—despite that being the single best thing about it.
Had this talk sitting in my “To Watch” list for a long time. It’s full of little insights. I really like the speaker’s presentation of this quote from Marshall McLuhan:
All media are extensions of some human faculty, mental or physical. The wheel is an extension of the foot. The book is an extension of the eye. Clothing is an extension of the skin. Electric circuitry is an extension of the central nervous system. The extension of any one sense displaces the other senses and alters the way we think, the way we see the world, and ourselves. When these changes are made, men change.
Frank talks about his process of selecting a typeface for his blog.
No matter how much one plans, a designer will crawl through their mental rolodex of fonts and see what feels right to their eye. Post-rationalization is an open secret in the design industry, but with personal work, there is no one to impress with rigor. One can go on intuition. The eye knows.
“Post-rationalization is an open secret in the design industry.” Love this line. It’s funny because Frank says it’s an open secret but I can’t actually remember ever seeing it written down anywhere.
Really I just liked these quotes. They seem so obvious when written down that they’re hardly worth noticing, but I’m not sure how many of us that work in software have truly internalized this truth such that we could articulate it clearly. I want to, which is why I’m writing this down.
there is an intrinsic tension among opposing forces that every engineering project must solve.
There is the desire to reduce costs; opposing that desire is the need to develop a reliable product with all the features and functionality needed to make it a success.
And this:
Build a little, test early and often so that mistakes are caught early and so stakeholders have a chance to see and respond, and so that the inevitable requirement changes are identified and absorbed during the process, not in a terrified rush at an acceptance test.
This article was making the rounds on the internet. Personally I think there’s lots of good technical advice in there, but also lots of more broad advice about developing anything as a product—including design systems (whatever you define that to be).
When we are trying to drive change, we do so with a customer-centric attitude towards the teams trying to understand and use a new system. Their happiness is the only real barometer of your success. This involves outreach, requirements gathering, feedback, iteration, and purposeful education and skill-sharing.
When in doubt, remember: you’re accountable for your team’s technical success, and your team’s technical success is–in the long run–judged by the people using your stuff.
Reflecting on the 10th anniversary of his book The Shallows:
Welcome to The Shallows. When I wrote this book ten years ago, the prevailing view of the Internet was sunny, often ecstatically so...In a 2010 Pew Research survey of some 400 prominent thinkers, more than 80 percent agreed that, “by 2020, people’s use of the Internet [will have] enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices.”
The year 2020 has arrived. We’re not smarter. We’re not making better choices.
Then later:
When it comes to the quality of our thoughts and judgments, the amount of information a communication medium supplies is less important than the way the medium presents the information and the way, in turn, our minds take it in. The brain’s capacity is not unlimited. The passageway from perception to understanding is narrow. It takes patience and concentration to evaluate new information — to gauge its accuracy, to weigh its relevance and worth, to put it into context — and the Internet, by design, subverts patience and concentration.
A collection of general principles the folks at Basecamp try to keep in mind when communicating. There’s a lot of really great stuff in there. I’ve only surfaced a few here that feel particularly relevant to me in February 2020.
Writing solidifies, chat dissolves. Substantial decisions start and end with an exchange of complete thoughts, not one-line-at-a-time jousts. If it's important, critical, or fundamental, write it up, don't chat it down.
Speaking only helps who’s in the room, writing helps everyone. This includes people who couldn't make it, or future employees who join years from now.
"Now" is often the wrong time to say what just popped into your head. It's better to let it filter through the sieve of time. What's left is the part worth saying.
A great analysis of choosing type and making creative decisions.
you can also pick by gut or chance once you’re certain you have a solid pool to choose from. Reasons can be arbitrary, and you need to leave room for whim. I once chose a typeface because I liked the 7. Sometimes one can overthink things.
I raise all this to show the natural limits of intent...Best to take those first big steps in the right direction, whittle down the options, and commit to what feels right to you. No choice is bulletproof, and no amount of evidence is ever going to completely clarify or validate a choice. This is what makes these choices creative.
Questioning the thinking that more collaboration is better:
Collaboration! So much effort goes into writing and talking about collaboration, and creating tools to facilitate collaboration and telecollaboration, with the tacit assumption that more collaboration is always better...Since communication overhead increases proportionally to the square of the number of people on the team—a fact illuminated by Brooks in the 1970s—what you actually want is as little collaboration as you can get away with.
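The quadratic growth Brooks describes is easy to make concrete — a quick sketch (function name is mine):

```javascript
// Number of pairwise communication channels on a team of n people:
// every person can talk to every other person, so n * (n - 1) / 2.
// Brooks' point: overhead grows with the square of n, so it explodes
// long before headcount does.
function commChannels(n) {
  return (n * (n - 1)) / 2;
}

commChannels(2);  // 1 channel
commChannels(5);  // 10 channels
commChannels(10); // 45 channels
```

Doubling the team from 5 to 10 more than quadruples the channels — which is the argument for as little mandatory collaboration as you can get away with.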
On the spurious nature of feature comparison:
Features don’t work, in the sense that they can be easily gamed. A brittle and perfunctory implementation, done quickly, is going to score more intramural brownie points over a robust and complete one. If the question is does product A have feature X? then the answer is yes either way. This also makes features a spurious basis for comparison in competing products because you need to seriously examine them to determine the extent to which they are any good.
How to speed up software development:
Like any other creative endeavour, software development can’t be sped up as much as we can eliminate the phenomena that slow it down.
Viewing software development through an entirely new lens:
A development paradigm that can be construed from the outside as setting great store by speed—or, I suppose, velocity—is invariably going to be under continuous political and economic pressure to accelerate...If instead you asserted that the work amounts to continual discovery, it happens at one speed, and could potentially continue indefinitely, you might be able to pace yourself.
This little story resonated with my own experience so much, I just wanted to make note of it—a kind of “hey, somebody else feels the same as me.”
I remember sitting down one evening after work to focus on a side project and losing the best part of the evening trying to get two different tools that I'd chosen to use playing nicely alongside each other. I finished for the night and realised that I'd made no progress...Once I had everything playing nicely, one of the tools would have an update which broke something and I'd repeat the process all over again.
This post is about seven months old, but I hadn’t seen this trick before. I didn’t even need an explanation (though theirs is great). Just reading the markup, the elegance of this trick for loading a CSS stylesheet async dawned on me.
<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">
I love little tricks in HTML like this that would otherwise require who knows how many lines of JavaScript.
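For the curious, a tiny helper of my own that emits the same markup makes the mechanics explicit — `media="print"` tells the browser to fetch the stylesheet at low priority without blocking render, then `onload` flips it to `media="all"` so the styles actually apply:

```javascript
// Sketch: generate the async-stylesheet markup from the post above.
// The media="print" / onload="this.media='all'" combination is the
// whole trick; this function just templates it for a given href.
function asyncStylesheet(href) {
  return `<link rel="stylesheet" href="${href}" media="print" onload="this.media='all'">`;
}

asyncStylesheet("/path/to/my.css");
// → <link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">
```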
REQUIREMENTS. What an absolutely terrible word. “As per the requirements…” It sounds so authoritative. So final. But at the end of the day it’s just some designers or BAs making some calls (maybe informed by plenty of research, maybe not) and rolling with them.
A really great piece from Brad arguing for the inclusion of front-end developers as equal partners in the design process.
A more collaborative process saves an absurd amount of time, money, and anguish. Frontend developers can not only help better determine level of effort of any design solution, but offer suggestions, alternatives, and solutions that better follow the grain of the web. “If we move this from here to here, we can build it out in 2 hours. But if it stays the way it is, it will take about 2 weeks to implement.” Repeat that a couple dozen times and you just saved yourselves months worth of unnecessary work.
I think this observation is spot-on.
I ended up down a Malcolm Gladwell rabbit hole the other day, which resulted in some interviews with him playing in the background while I worked and I liked this:
A special kind of insight comes from distance...I think we’re really reluctant to re-examine our conclusions about the past. There’s so much to be learned by simply going back and saying “we made up our mind about this [thing] that happened”, boom, the day after it happened. And then we just let it sit there and never went back to say, “well, was I right?” There are all kinds of things you can learn, years later, that can fundamentally change your understanding of history and how you reach your own conclusions.
Then later:
It brings to mind that famous epigram “history is written by the victors.” And that’s true. The account that you get in the first go-round is written by the guy who won. So one of the reasons it’s so important to go back and look at history again is you have to give other people a chance to speak. People are willing to be honest with the passage of time.
Is it my job to be realistic and empathetic to constraints, or to be the persistent voice of the user who makes stuff better at the cost of momentum...? As with most things, it depends.
An honest look, I felt, at the reality of being a designer.
We have to learn to push for the impossible while navigating and respecting the constraints of the people and organisations we work with.
Concept designs (and worse, concept videos) are a sign of dysfunction and incompetence at a company. It’s playing make-believe while fooling yourself and your audience into thinking you’re doing something real. Concepts allow designers to ignore real-world constraints: engineering, pricing, manufacturing, legal regulations, sometimes even physics. But dealing with real-world constraints is the hard work of true design. Concepts don’t stem from a lack of confidence. They stem from a dereliction of the actual duties of design.
Later:
Designing at the limits of possibility is one thing; designing unbounded by reality is another.
I just liked this one excerpt. I think there’s a key differentiation between “adding white space” and “removing stuff”:
Removing or excluding elements from a web page necessarily leaves empty space. So that space is not an action that you do, but a result that you get through throwing unnecessary elements out, because you don’t need more space, but you need less stuff.
Loved this paragraph from Carr:
When, in 1965, an interviewer from Cahiers du Cinema pointed out to Jean-Luc Godard that “there is a good deal of blood” in his movie Pierrot le Fou, Godard replied, “Not blood, red.” What the cinema did to blood, the internet has done to happiness. It turned it into an image that is repeated endlessly on screens but no longer refers to anything real.
Apparently Martin Scorsese threw some shade—at least that’s how some people saw it—at the Marvel films:
I’ve tried to watch a few of them and they’re not for me, they seem to me to be closer to theme parks than they are to movies as I’ve known and loved them throughout my life, and in the end, I don’t think they’re cinema.
He then wrote an opinion piece to clarify what he was trying to say:
There’s worldwide audiovisual entertainment, and there’s cinema. They still overlap from time to time, but that’s becoming increasingly rare. And I fear that the financial dominance of one is being used to marginalize and even belittle the existence of the other.
Read his words how you want, but one of my interpretations is: data-driven movie making has ruined cinema:
everything in [The Marvel movies] is officially sanctioned because it can’t really be any other way. That’s the nature of modern film franchises: market-researched, audience-tested, vetted, modified, revetted and remodified until they’re ready for consumption.
“But the data proves that’s what the people want!” He addresses that:
And if you’re going to tell me that it’s simply a matter of supply and demand and giving the people what they want, I’m going to disagree. It’s a chicken-and-egg issue. If people are given only one kind of thing and endlessly sold only one kind of thing, of course they’re going to want more of that one kind of thing.
He concludes:
In the past 20 years, as we all know, the movie business has changed on all fronts. But the most ominous change has happened stealthily and under cover of night: the gradual but steady elimination of risk.
I wonder if this is happening in pockets of software design and development, for better or worse...
This piece is quite an exhaustive look at why the folks behind Go version the way they do. I found the entire thing quite an interesting analysis of semver and versioning software. So if you’re interested in that kind of computer science stuff, read the whole thing!
While the article deals specifically with the topic of versioning in software, I found this commentary about code aesthetics to have many parallels to design. I thought it was a good articulation of how I feel about keeping links underlined—and in many cases the default “blue”—on the web.
The most common objection to semantic import versioning is that people don’t like seeing the major versions in the import paths. In short, they’re ugly. Of course, what this really means is only that people are not used to seeing the major version in import paths.
...
Both these changes—upper-case for export and full URLs for import paths—were motivated by good software engineering arguments to which the only real objection was visual aesthetics. Over time we came to appreciate the benefits, and our aesthetic judgements adapted. I expect the same to happen with major versions in import paths. We’ll get used to them, and we’ll come to value the precision and simplicity they bring.
What would the www be like if everyone kept their links underlined?
There are two types of software companies: those that ship code that embarrasses their engineers and those that go bankrupt.
Based on my experience thus far in my career, I would agree with this statement. Granted it’s a black and white statement, but if you read between the lines, the essence here resonates with me.
I think a corollary to design would quite frequently hold true as well: “there are two types of software companies: those that ship products that embarrass their designers and those that go bankrupt.”
A beautiful piece that ruminates on the experience of music as it was before the iPod. Back then, music was an experience that shaped your identity, your life! And now that experience is completely gone, except for those of us who remember it. We are vessels of the cassette.
Physicality [the cassette tape] feels like an investment in something: a relationship with a piece of work that I'll endeavour to like. If I decide I don't like it, I will be sure of that, having tested more thoroughly than if it was one of hundreds of Spotify album samplings.
Maybe it’s just nostalgic ramblings, but I agree with his conclusion: “[I enjoy] music in more ways than one, and I feel much richer for it.”
the tech industry prefers the word “ethics” over morals
Why? Because:
“Ethics” is nice. Morals are uncomfortable.
“Ethics” is less binding. They feel more abstract, neutral, less scary, less obligatory. Morals command.
“Ethics” is abstract. Morals are concrete.
Overall, a bit rambling in spots but had some interesting insights I think.
Gruber’s commentary on Twitter’s apparent foray into creating an open source standard for social media. What I liked was John’s analysis of prescriptive vs. descriptive specs.
XHTML was a boil-the-ocean plan to create a new version of HTML, its creators’ ideal for how HTML should be used — a prescriptive spec. HTML5 took the approach of standardizing how HTML already was being used — a descriptive spec. We all use HTML5 today.
Nicholas Carr with another great analysis. This time he points his lens at the “influencer”:
Marketing has displaced thinking as our primary culture-shaping activity, the source of what we perceive ourselves to be. The public square having moved from the metaphorical marketplace of ideas to the literal marketplace of goods, it’s only natural that we should look to a new kind of guru to guide us.
Then later:
The idea that the self emerges from the construction of a set of values and beliefs has faded. What the public influencer understands more sharply than most is that the path of self-definition now winds through the aisles of a cultural supermarket. We shop for our identity as we shop for our toothpaste, choosing from a wide selection of readymade products. The influencer displays the wares and links us to the purchase, always with the understanding that returns and exchanges will be easy and free.
A great post. Read the entire thing.
I find something poetic in the fact that the dependencies I rely on the most will eventually not be needed.
I’ve actually been thinking about this the past couple years. Dave put this feeling into words here.
So much of my web dev work for the last couple years, both via my employer and on personal projects, has been around trying to make conscientious decisions about what I include as dependencies in a project, and why. For personal projects, I’ve been trying to get to “dependency zero” (where reasonably possible) or close to it. When I do bring in deps, I try to architect my project and use the dependencies in such a way that, whatever code I write and whatever tool I compile with, one day I’ll be able to remove that dependency entirely from my project and not have to touch a single line of code other than in my package.json. I’ve been able to do that a few times and let me tell you: that is a nice feeling.
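One pattern that approximates this — a sketch of my own, not necessarily what Dave or the author had in mind, and the module and function names are made up — is hiding a dependency behind a thin local wrapper, so the rest of the codebase never imports the package directly:

```javascript
// pad.js — a thin local wrapper around what used to be a third-party
// left-pad-style dependency. App code imports from this file, never
// from the package itself, so dropping the dependency means editing
// one file plus package.json; everything downstream is untouched.
// The platform now covers this case natively via String#padStart:
function leftPad(value, length, fill = " ") {
  return String(value).padStart(length, fill);
}

leftPad("5", 3, "0"); // "005"
leftPad(7, 3);        // "  7"
```

Once the native feature exists, the wrapper body shrinks to a one-liner and the dependency disappears without a codebase-wide refactor.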
Most design content has become poor quality, surface-level content marketing that does more damage than good, because it offers over-simplified, misinformed perspectives dressed up as guidance. One hardly gets the sensation of lived experience and professional acumen in the words.
Love that articulation. Love all of Frank’s words. Looking forward to following this little project he’s started.
This is a little excerpt from an online book about “modest JavaScript”. I liked this particular paragraph, which brings into focus the more general meaning of the word “dependency” and then circles back to its usage in the context of software:
The idea of dependencies in software writing goes back a while...But being dependent, that’s a concept with an even longer history. Including dependencies into your project feels like a win (all of this work you don’t have to do), but depending on other people’s work doesn’t feel so much like a win anymore. Being dependent: that doesn’t feel good at all.
An interesting post as always from Jeremy, but this line:
I know that it would make my life as a developer harder. But that’s of lesser importance. It would be better for the web.
I like Shawn and the unique perspectives on learning he brings to the world of webdev. I haven’t listened to this entire podcast yet, but I liked this excerpt from the transcript:
“You can learn so much on the internet for the low, low price of your ego.” If you keep your identity small, you can remain open to new ideas. If you make what you know a part of your identity, being receptive to new ideas and accepting that you were wrong becomes challenging.
The Web is smothering in useless images. These clichéd, stock images communicate absolutely nothing of value, interest or use. They are one of the worst forms of digital pollution because they take up space on the page, forcing more useful content out of sight. They also slow down the site’s ability to download quickly. In the last ten years, webpages have quadrupled or more in file size, and one of the primary reasons for this is useless image proliferation. If organizations are filling their websites with these useless, information-free images, are they also filling their websites with useless, information-free text? Are we still in a world of communicators and marketers whose primary function and objective is to say nothing of value and to say it as often as possible? And whatever you do, look pretty.
The struggle is real.
Loved this imagined conversation:
“We have this contact form and we need a useless image for it.” “How about a family cavorting in a field of spring flowers with butterflies dancing in the background?” “Perfect.”
My wife shared this with me, commenting that I should think about this in the context of our young kids. With our 4 year old just about to reach the age where the social convention is you send them off to public school, we’ve been discussing topics like this.
Elites first confront meritocratic pressures in early childhood. Parents—sometimes reluctantly, but feeling that they have no alternative—sign their children up for an education dominated not by experiments and play but by the accumulation of the training and skills, or human capital, needed to be admitted to an elite college and, eventually, to secure an elite job.
One of the most fascinating things about the web is its “don’t break current implementations” ethos, which stands in direct contrast to just about every other piece of software ever made:
This permanence to the web has always been one of the web’s characteristics that astounds me the most. It’s why you can load up sites today on a Newton, and they’ll just work. That’s in such sharp contrast to, well, everything I can think of. Devices aren’t built like that. Products in general, digital or otherwise, are rarely built like that. Native platforms aren’t built like that. That commitment to not breaking what has been created is simply incredible.
Later:
as some frameworks are, just now, considering how they scale and grow to different geographies with different constraints and languages, the web platform has been building with that in mind for years.
Conclusion:
Use the platform until you can’t, then augment what’s missing. And when you augment, do so with care because the responsibility of ensuring the security, accessibility, and performance that the platform tries to give you by default now falls entirely on you.
An interesting look at the effects of UI design. What do you think culture would look like if we reversed these UIs? Praise required words while negativity was easily accessible via a single interaction? Who knows. Could be different. But also humans are humans and it could be the same.
First, a look at Facebook’s UI:
one negative reply literally takes up more visual space than tens of thousands of undifferentiated likes.
Then Twitter’s:
The arrangement is even worse on Twitter. Liking stays attached to the original tweet and makes most positive interactions static. Negative reactions must be written as tweets, creating more material for the machine. These negative tweets can spread through retweets and further replies. This means negativity grows in number and presence, because most positivity on the service is silent and immobilized.
Positivity is “silent and immobilized”. What a fascinating assessment—and the result of this?
like can’t go anywhere, but a compliment can go a long way. Passive positivity isn’t enough; active positivity is needed to counterbalance whatever sort of collective conversations and attention we point at social media. Otherwise, we are left with the skewed, inaccurate, and dangerous nature of what’s been built: an environment where most positivity is small, vague, and immobile, and negativity is large, precise, and spreadable.
I’ve kind of been following the development of optional chaining in JavaScript. It’s now stage 3, which had me re-evaluating my own thoughts on the syntax. @housecor has been a visible opponent of the syntax and I found this piece via a thread on his twitter. It has some good points specifically relevant to optional chaining, but even more broadly relevant to writing JS applications.
Trust in your data, and your code will be more predictable and your failure cases more obvious. Data errors are simpler to debug if an error is thrown close to the source of the bad data.
Unnecessary safety means that functions will continue to silently pass bad data until it gets to a function that isn’t overly safe. This causes errors to manifest in a strange behavior somewhere in the middle of your application, which can be hard to track...Debugging it means tracking the error back to find where the bad data was introduced.
And later:
Being overly cautious with external data means that the next person to consume it doesn’t know if it’s trustworthy, either. Without digging into the source to see how trustworthy the data is, the safest choice is to treat it as unsafe. Thus the behavior of this code forces other developers to treat it as an unknown, infecting all new code that’s written.
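The article’s point can be sketched in a few lines (the data shape and function names here are mine): validate once at the boundary and throw close to the source of bad data, instead of sprinkling optional chaining through every consumer:

```javascript
// Overly "safe": bad data silently collapses to a fallback value and
// surfaces as confusing behavior far from wherever it was introduced.
function cityUnsafe(user) {
  return user?.address?.city ?? "Unknown";
}

// Trusting: validate once, where the data enters the system, so an
// error is thrown next to the bad data. Everything downstream can
// then access fields directly, with no defensive chaining.
function parseUser(raw) {
  if (typeof raw?.address?.city !== "string") {
    throw new Error("Invalid user record: missing address.city");
  }
  return raw;
}

function city(user) {
  return user.address.city; // safe by construction — parseUser ran first
}
```

The difference shows up in debugging: with the second style, the stack trace points at the boundary where the malformed record arrived, not at some consumer three modules away.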
An absolutely wonderful piece on writing.
Matt Jones: “[Writers] are the fastest designers in the world. They’re amazing at boiling down incredibly abstract concepts into tiny packets of cognition, or language.” ... writing is part of every design. If you can clearly define what you’re making and articulate its value, the steps to bring it out into the world will go much faster.
This resonates about 1,000% with my experience.
Writing can be a tool for talking to ourselves when we’re still figuring things out. A sort of mirror or feedback system. A way to understand and articulate design.
When I sit down to write, I don’t usually know what I’m going to say. It’s only through the act of writing that it becomes clear that I need to say anything at all.
Quoting David Foster Wallace, who is talking about people being able to explain their craft to those outside it:
maybe being able to communicate with people outside one’s area of expertise should be taught, and talked about, and considered as a requirement for genuine expertise.
I originally discovered this via a link on Dave Rupert’s blog—along with his relatable commentary:
Whenever I read the original Agile Manifesto and its accompanying Twelve Principles, my soul leaps! But in practice inside enterprise Agile frameworks, my soul is often left crushed...In my experience, there seems to be a strongly held belief that if you obey certain rituals: have certain meetings, say certain words, pray certain prayers, commit to improbable deadlines; your product will enter the Promise Land. It’s hard for me to rectify what I know about software development with this religion. I have resigned myself to being an apostate.
However, I didn’t get around to listening to the source video until recently. It’s fantastic. The speaker is Martin Fowler, one of the original signers of the Agile Manifesto. The fact that he basically calls apostasy on what most of us likely participate in as the de-facto, day-to-day, shared implementation of agile, is striking.
with so many differences, how can we say there is one way that will work for everybody? We can’t. And yet what I’m hearing so much...is imposing methods upon people. That to me is a travesty.
Even the agile advocates wouldn’t say that agile is necessarily the best thing to use everywhere. The point is: the team doing the work gets to decide how to do it. That is a fundamental agile principle, which means that if a team doesn’t want to work in an agile-way, then agile probably isn’t appropriate in that context. And that is the most agile-way of doing things.
I can’t help but nod my head in agreement with Dave’s summary: “Fowler’s perspective and patience with the Agile Industrial Complex gives me a foothold to keep from falling into hopelessness.”
Your job as a leader isn’t to just help clarify thought process – but to give confidence in their thinking.
As Wade says, “You’re trying to just help them get to that realization that, ‘You know what to do.’”
They have some good suggestions on 16 questions you can ask to propel those doing the problem-solving, instead of jumping in to solve the problem yourself:
- What do you see as the underlying root cause of the problem?
- What are the options, potential solutions, and courses of action you’re considering?
- What are the advantages and disadvantages to each course of action?
- How would you define success in this scenario?
- How do you know you will have been successful?
- What would the worst possible case outcome be?
- What’s the most likely outcome?
- Which part of the issue or scenario seems most uncertain, befuddling, and difficult to predict?
- What have you already tried?
- What is your initial inclination for the path you should take?
- Is there another solution that isn’t immediately apparent?
- What’s at stake here, in this decision?
- Is there an easier way to do what you suggested?
- What would happen if you didn’t do anything at all?
- Is this an either/or choice, or is there something you’re missing?
- Is there anything you might be explaining away too quickly?
To use a GitHub backend with NetlifyCMS, you have to have your own server to handle OAuth. This is a requirement of GitHub’s authentication flow. The good news about that, is that it’s a standard OAuth flow. The bad news about that, is that it’s a standard OAuth flow.
This is what I love about Tyler’s writing. So approachable. He writes how my brain thinks and my heart feels when I’m trying to wrangle computers to do stuff.
What I needed to do was build my own server to handle the OAuth flow. This is a thing I’ve done and written about before. OAuth is like that for me. I set it up. Deploy it. Forget it. Then have to give myself a refresher to do again. That’s what the server example in this post is.
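For the curious, the GitHub flow Tyler is wiring up boils down to two steps; here’s a rough sketch of the first (the client ID, redirect URI, and scope here are placeholder values — a real server also has to keep the client secret private for the token exchange):

```javascript
// Step 1: send the user to GitHub's authorize endpoint.
// Step 2 (not shown): GitHub redirects back with ?code=..., which the
// server exchanges for an access token by POSTing — server-side, with
// the client secret — to https://github.com/login/oauth/access_token.
function githubAuthorizeUrl({ clientId, redirectUri, state }) {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    scope: "repo", // NetlifyCMS needs repo access to commit content
    state,         // random value, checked on the way back to prevent CSRF
  });
  return `https://github.com/login/oauth/authorize?${params}`;
}

githubAuthorizeUrl({
  clientId: "your-client-id",
  redirectUri: "https://example.com/callback",
  state: "random-string",
});
```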
If you’re not following his writing, you should.
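Since it really is a standard OAuth flow, the first leg is easy to sketch. Here’s a minimal illustration of my own (not Tyler’s server code): the authorize endpoint is GitHub’s documented one, but `clientId` and `redirectUri` are placeholders you’d supply yourself.

```javascript
// Build the GitHub authorization URL a NetlifyCMS-style OAuth server
// redirects users to. The second leg (not shown) POSTs the returned
// "code" to https://github.com/login/oauth/access_token for a token.
function authorizeUrl(clientId, redirectUri) {
  const params = new URLSearchParams({
    client_id: clientId,       // placeholder: your OAuth app's client ID
    redirect_uri: redirectUri, // placeholder: your server's callback URL
    scope: 'repo',             // NetlifyCMS needs repo access to commit
  });
  return `https://github.com/login/oauth/authorize?${params}`;
}
```

Set it up, deploy it, forget it, indeed.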
There is no path. Even in large organizations that have salary bands and matrices…there is no path. There are precedents that have been set by other humans, but none of those are your path. Your path is the only one that’s authentic to you, the one that gets you excited on a Sunday night about the next morning. Your path is super-connected to your values, the way you appreciate the world and the vision you have for your contribution to it. What are you here to do? And how can you be doing more of it?
Always excellent advice from Jen. You should follow her writing too.
Just a couple little excerpts that stood out as timely for me.
On complexity:
I have struggled with complexity my entire career. Why do systems and apps get complex? Why doesn’t development within an application domain get easier over time as the infrastructure gets more powerful rather than getting harder and more constrained? In fact, one of our key approaches for managing complexity is to “walk away” and start fresh. Often new tools or languages force us to start from scratch which means that developers end up conflating the benefits of the tool with the benefits of the clean start. The clean start is what is fundamental. This is not to say that some new tool, platform or language might not be a great thing, but I can guarantee it will not solve the problem of complexity growth. The simplest way of controlling complexity growth is to build a smaller system with fewer developers.
On designing a functioning organization:
One dirty little secret you learn as you move up the management ladder is that you and your new peers aren’t suddenly smarter because you now have more responsibility. This reinforces that the organization as a whole better be smarter than the leader at the top. Empowering every level to own their decisions within a consistent framing is the key approach to making this true. Listening and making yourself accountable to the organization for articulating and explaining the reasoning behind your decisions is another key strategy. Surprisingly, fear of making a dumb decision can be a useful motivator for ensuring you articulate your reasoning clearly and make sure you listen to all inputs.
An interesting look at what the author believes is “a trend towards self-indulgence” that “can be summed up in two words: developer experience”.
This is the idea that investing in the whims and wants of developers allows them to build faster and cheaper, thus helping them deliver a better product – eventually. The excitement developers exhibit towards new technology can be infectious, but a magpie-like behaviour sees them flit and flirt from one framework to another, abandoning what’s been tried and tested, and throwing scorn on anything perceived as outdated. And there’s always another developer-focused feature to implement before the user experience can be addressed. As the complexity of digital software grows and the size of websites increases (weighed down by client-side libraries and privacy-invading scripts), it’s safe to say this argument amounts to little more than trickle-down ergonomics.
I really liked that phrase, “trickle-down ergonomics”. The author continues:
And now designers are getting in on the act. Concerned with order and beauty, and with a low tolerance for inconsistency and a penchant for unachievable perfection, efforts are now expended on the creation of all-encompassing design systems. An honest appraisal would acknowledge that the intended audience for these is not the customer but their colleagues. After all, a user focused on achieving a particular task is unlikely to notice a few stray pixels or inconsistent padding.
On the non-user-friendliness of Linux:
Every operating system is a batch-card processing retro-mess underneath. Linux makes this a virtue to be celebrated rather than a sin to be hidden. I appreciate that. It’s nice not to have to pretend that computers actually are good, or work.
On the “personal” in “P.C.”:
My goal as a software person is to figure out ways to put “personal” back into the systems we discuss and build. “Efficient” or “slick” or “easy to deploy to AWS” are great things, but “empowering” and “gave me a feeling of real control” are even better.
There’s actually a lot of good stuff in here. It got the gears in my brain spinning on the possibilities for doing things “the old way” (I already wrote about some of that). But I wanted to save this particular quote about “unprogressive non-enhancement”.
You take some structured content, which follows the vertical flow of the document in a way that everyone understands.
Which people traverse easily by either dragging their scroll bar with their mouse, or operating the keyboard using the up and down keys, or using the spacebar.
Or if they're using a touch device, simply flicking backwards and forwards in that easy way that we've all become used to. What you do is you take that, and you fucking well leave it alone.
A lot of good reflection in here on Dan’s personal experiences. But what I really liked was this take on keeping a healthy perspective of your digital work in conjunction with the other things in life that are important:
One thing all of this [digital design work] has in common is that it’s all gone. It doesn’t exist anymore. Kaput. Deleted.
Now you can either get really depressed about how digital work is so disposable, or you can view that as a positive. That you can continue to reinvent yourself and your work.
Remember how important some of this stuff seemed at the time? Emergency meetings? Calls while on vacation? There are no lives at stake here. It’s here and then it’s replaced. Something I try to keep in mind when things start getting a little urgent and stressy.
...while pixels can disappear and your work is temporary, people and relationships stick around. Soon, you’ll realize they are the most important part of all of this. Long after the work is gone, if you do things right, you’ll have good people, friends, co-workers, future co-workers around you that will be much more valuable than the things you created.
While specifically targeted at designers and the design industry, I thought this was a rather comedic talk on the culture of the technology industry at large. It’s kind of written in the spirit of the old tale “The Emperor’s New Clothes”, a way of saying “look at these moments in your professional life and realize that nobody is wearing any clothes”.
A couple quotes I liked:
that’s the spirit of the creative: always carrying that soul-crushing insecurity
the more buzzwords you use, the less you have to explain your actual design thinking.
“empathy map”, “user journey maps”, we’re kinda crazy about maps, I don’t know what it is, probably because we’re lost [as an industry].
I think that websites should work without JavaScript wherever possible. Web components don't.
This is a pretty good summary of my feelings in dealing with web components. I particularly like his points about progressive enhancement. I’ve only found web components particularly useful for pieces of your UI that are intrinsically interactive or really small, discrete pieces of UI that can be progressively enhanced quite easily (like Github’s time-elements).
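To illustrate the kind of component that does enhance well, here’s a sketch in the spirit of GitHub’s time-elements (not its actual code; the tag name and helper function are my own invention):

```javascript
// Formatting kept in a plain function so the element is a thin wrapper.
// Returns null for unparseable input so the fallback text is left alone.
function formatDatetime(iso) {
  const date = new Date(iso);
  return Number.isNaN(date.getTime()) ? null : date.toLocaleString();
}

// The element wraps server-rendered, human-readable text and upgrades
// it in place. If JavaScript never runs, the fallback text remains.
if (typeof HTMLElement !== 'undefined' && typeof customElements !== 'undefined') {
  customElements.define('local-time', class extends HTMLElement {
    connectedCallback() {
      const formatted = formatDatetime(this.getAttribute('datetime'));
      if (formatted) this.textContent = formatted;
    }
  });
}
```

Markup like `<local-time datetime="2019-03-01T12:00:00Z">March 1, 2019</local-time>` stays perfectly readable without any JavaScript at all, which is exactly the progressive-enhancement sweet spot.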
I thought this was an interesting set of musings about the liberating feeling that comes with a true “personal” computer—a computer that you can do what you want, when you want, how you want—and how that freedom has eroded over time. I think it’s another side of the thoughts I wrote a couple months back about software product interface design. It’s the rationale behind why I can’t move to an iPad as my primary computing device.
Maybe because I lived through this — maybe because I’m a certain age — I believe that that freedom to use my computer exactly how I want to, to make it do any crazy thing I can think of — is the thing about computers.
That’s not the thing about iOS devices. They’re great for a whole bunch of other reasons: convenience, mobility, ease-of-use.
You can do some surface-level automation, but you can’t dig deep and cobble together stuff — crossing all kinds of boundaries — with some scripts the way you can on a Mac. They’re just not made for that. And that’s fine — it’s a whole different thing.
Later:
With every tightened screw we have less power than we had. And doing the things — unsanctioned, unplanned-for, often unwieldy and even unwise — that computers are so wonderful for becomes ever-harder...But if we don’t have this power that is ours, then I don’t actually care about computers at all. It meant everything.
A single number bump replaces a mountain of marketing
Dave muses on the versioning numbers behind HTML, CSS, and JavaScript:
In JavaScript, there’s a never-ending stream of libraries, frameworks, polyfills, importers, bundlers, alterna-script languages, and performance problems to write about. It’s sure to dominate the daily programming news cycle. HTML and CSS don’t really have that position and luxury any more. In many ways, the switch to a “Living Standard” have made them dead languages, or at least mostly-dead. New language features don’t show up like they used to, or at least I don’t see the release notes anymore.
I’m on a bit of a quest to understand why these three technologies built to work together are so unequally yoked in popularity and their communities polarized. One end of the spectrum experiences a boom while the other experiences a bust. The rising tide does not lift all boats.
An interesting look at how the UK government tried to educate its citizens about computers in the ’70s, and how their approach back then compares to the way we “teach computers” nowadays.
I really liked the author’s points. Especially the idea of teaching general computing principles, not what code to write to make a computer do something, but how and why the computer requires you to write code to run programs (emphasis mine):
“Learn to code” is Codecademy’s tagline. I don’t think I’m the first person to point this out—in fact, I probably read this somewhere and I’m now ripping it off—but there’s something revealing about using the word “code” instead of “program.” It suggests that the important thing you are learning is how to decode the code, how to look at a screen’s worth of Python and not have your eyes glaze over. I can understand why to the average person this seems like the main hurdle to becoming a professional programmer. Professional programmers spend all day looking at computer monitors covered in gobbledygook, so, if I want to become a professional programmer, I better make sure I can decipher the gobbledygook. But dealing with syntax is not the most challenging part of being a programmer, and it quickly becomes almost irrelevant in the face of much bigger obstacles. Also, armed only with knowledge of a programming language’s syntax, you may be able to read code but you won’t be able to write code to solve a novel problem.
As I’ve written before, I suspect learning about computing at a time when computers were relatively simple was a huge advantage. But perhaps another advantage these people had is shows like The Computer Programme, which strove to teach not just programming but also how and why computers can run programs at all. After watching The Computer Programme, you may not understand all the gobbledygook on a computer screen, but you don’t really need to because you know that, whatever the “code” looks like, the computer is always doing the same basic thing. After a course or two on Codecademy, you understand some flavors of gobbledygook, but to you a computer is just a magical machine that somehow turns gobbledygook into running software. That isn’t computer literacy.
I’m banging the drum for “general principles” loudly now, so let me just explain what I think they are and why they are important. There’s a book by J. Clark Scott about computers called “But How Do It Know?” The title comes from the anecdote that opens the book. A salesman is explaining to a group of people that a thermos can keep hot food hot and cold food cold. A member of the audience, astounded by this new invention, asks, “But how do it know?” The joke of course is that the thermos is not perceiving the temperature of the food and then making a decision—the thermos is just constructed so that cold food inevitably stays cold and hot food inevitably stays hot. People anthropomorphize computers in the same way, believing that computers are digital brains that somehow “choose” to do one thing or another based on the code they are fed. But learning a few things about how computers work, even at a rudimentary level, takes the homunculus out of the machine.
A novel take on CSS selectors as if statements and for loops:
menu a {
color: red;
}
menu a:first-child {
color: blue;
}
menu a:not(#current) {
color: red;
}
Now do that imperatively in JS:
for (all menus) {
for (all links in this menu) {
let first = [figure out if this is the first link in the menu]
if (first) {
link.color = 'blue'
} else if (link.id !== 'current') {
link.color = 'red';
}
}
}
The drawback of the JavaScript version is that it’s more verbose than the CSS version, and hence more prone to error. The advantage is that it offers much finer control than CSS. In CSS, we’ve just about reached the limits of what we can express. We could add a lot more logic to the JavaScript version, if we wish.
In CSS, we tell the browser how links should look. In JavaScript, we describe the algorithm for figuring out how an individual link should look.
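His pseudocode can be turned into something runnable. Here’s a sketch of the same algorithm over plain objects standing in for DOM nodes (my own construction, purely to make the comparison concrete):

```javascript
// Each menu is an array of link objects; a real implementation would
// walk DOM nodes instead. Mirrors the imperative logic above: the
// first link is blue, every other link except #current is red.
function colorizeMenus(menus) {
  for (const menu of menus) {
    menu.forEach((link, index) => {
      if (index === 0) {
        link.color = 'blue';
      } else if (link.id !== 'current') {
        link.color = 'red';
      }
    });
  }
  return menus;
}
```

Even in this toy form you can see his point: the JavaScript spells out the algorithm, while the CSS just declares the outcome.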
At the end of February, WebAIM published an accessibility analysis of the top one million home pages. The results are, in a word, abysmal...And we failed. I say we quite deliberately. This is on us: on you, and on me. And, look, I realize it may sting to read that.
But this piece isn’t just a criticism. I like Ethan’s resolution towards building a more-accessible web. It’s a practice I think I could incorporate into anything I want to learn.
The only way this work gets done is if we start small, and if we work together. Instead of focusing on “accessibility” writ large...aim to do one thing this week to broaden your understanding of how people use the web, and adapt your design or development practice to incorporate what you’ve learned.
Or at least, that’s what I’m going to do. Maybe you’d like to join me.
I’ve actually never really taken the time to try and understand exactly the difference between preload, prefetch, preconnect, prerender, etc. This article sums it up nicely. In fact, I’m going to sum it up based on my understanding of how they summed it up. Is that enough summing for you?
- <link rel="preload"> – use it when you want to “preload” a resource needed on the current page, at high priority.
- <link rel="prefetch"> – use it when you want to “prefetch” a resource you think you’ll need on a subsequent page.
- <link rel="preconnect"> – use it when you want to “preconnect” to a domain for resource(s) you know you’ll need soon.
- <link rel="dns-prefetch"> – similar to “preconnect”, but less featured. However, it does support older browsers that “preconnect” does not.
- <link rel="prerender"> – use it when you want to load and render a page in the background that you anticipate the user will soon navigate to.
There’s a lot more useful and nuanced information in the article beyond what I’ve summarized here. Check it out if you don’t already know the differences.
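In markup, those hints look something like this (the URLs are placeholders of my own, not from the article):

```html
<!-- high-priority fetch for a resource the current page needs -->
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
<!-- low-priority fetch for a likely next page -->
<link rel="prefetch" href="/next-article.html">
<!-- warm up DNS + TCP + TLS for a third-party origin -->
<link rel="preconnect" href="https://cdn.example.com">
<!-- DNS lookup only, for older browsers -->
<link rel="dns-prefetch" href="https://cdn.example.com">
<!-- load and render a whole page in the background -->
<link rel="prerender" href="/checkout.html">
```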
I actually always wondered why the file scheme had three slashes in it. Now I know—and it makes perfect sense.
a file scheme has 3 slashes (compared to the two used after http) because the scheme for URLs is <proto>://<host>/<path>, and since file (in most cases) has no host (it's your machine), it becomes file:///<path> (ref).
Really, I just loved this line. I wish more articles I read started with this premise:
There won’t be much to learn. In fact, we’ll spend most of our time unlearning.
Great design can’t ship without great relationships. Be pleasant to work with! Design is the minimum bar, relationships are the highest bar.
Visual hierarchy is...the underpinning of all visual communication. Without it design has no value. “I don’t paint things. I only paint the difference between things.” – Henri Matisse
Problem definition becomes clearer as we begin solving the problem, refine the problem further, solve the problem further, repeat. The process is circular, not linear.
Some good points in there.
An interesting article detailing the evolution of different application architectures over the years. Though this is specific to SoundCloud’s evolving architecture, it does seem to follow the path trodden by the industry at large.
I found the BFF pattern proposed in the article quite interesting, as it was not a pattern I’d seen before. It does make a lot of sense though. As services became more generic over the years in order to please their consumers, we ended up with clients that had to make possibly hundreds of API calls just to draw a single UI. The idea of having each client maintain its own “server” which reaches out to various services for its own specific needs is really interesting. Granted, GraphQL can do this for you in a sense, but trying to create a GraphQL endpoint that can appease the needs of a variety of clients can land you in the same dilemma. However, if you spin up multiple “BFF” GraphQL servers, each one specific to its client’s needs, then things get interesting!
On a more technical problem, our public APIs almost by definition are very generic. To empower third-party developers to build interesting integrations, you need to design an API that makes no assumptions about how the data is going to be used. This results in very fine-grained endpoints, which then require a large number of HTTP requests to multiple different endpoints to render even the simplest experiences...The idea was that having the team working on the client own the API would allow for them to move much quicker as it required no coordination between parts
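The shape of the idea is easy to sketch. Here’s a hedged illustration of a BFF endpoint of my own devising (the service paths and `fetchJson` helper are placeholders, not SoundCloud’s actual APIs): it fans out to generic services and returns exactly the shape its one client needs.

```javascript
// A BFF endpoint for a hypothetical mobile home screen. Instead of the
// client making many fine-grained calls, this one server-side function
// aggregates them and reshapes the result for this client alone.
// fetchJson is injected: (url) => Promise<parsed JSON>.
async function mobileHomeScreen(userId, fetchJson) {
  const [user, tracks, notifications] = await Promise.all([
    fetchJson(`/users/${userId}`),
    fetchJson(`/users/${userId}/tracks?limit=5`),
    fetchJson(`/users/${userId}/notifications/unread-count`),
  ]);
  // Shape the response for this client only; a web BFF could return
  // something entirely different from the same underlying services.
  return {
    displayName: user.name,
    recentTracks: tracks.map((t) => t.title),
    unread: notifications.count,
  };
}
```

Because the team that owns the client also owns this aggregation layer, they can change the response shape without coordinating with the teams behind the generic services.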
I’ve known Shawn for a little while online, but just recently met him in person. We got to talking about a variety of things and he told me about this short little piece of writing he’d done sometime past. So I looked it up and read it. It’s good. I like the metaphor that comes to mind of “creating learning exhaust”. I think that makes writing feel more feasible and accessible. What you produce doesn’t have to be Hemingway; rather it’s often just going to be the byproduct of your learning.
You already know that you will never be done learning. But most people "learn in private", and lurk. They consume content without creating any themselves…What you do here is to have a habit of creating learning exhaust. Write blogs and tutorials and cheatsheets. Speak at meetups and conferences. Ask and answer things on Stackoverflow or Reddit. (Avoid the walled gardens like Slack and Discourse, they're not public). Make Youtube videos or Twitch streams. Start a newsletter. Draw cartoons (people loooove cartoons!). Whatever your thing is, make the thing you wish you had found when you were learning…just talk to yourself from 3 months ago.
An interesting and fresh perspective on digital design. No matter what aesthetics you put into your app, that’s never what people talk about. They don’t talk about what it looks like, that’s what designers talk about. They talk about what they can do with it.
I’ve been feeling this more and more. Quite often I’d honestly prefer system-native controls instead of custom styled or custom designed up controls. They’re boring, but they’re familiar and usable and dependable. And boring.
But big or small, I beg you, stay boring. Because true delight will always live outside your product. As Chris Kiess notes, “I’ve spent a lot of time in the field on various projects and it is rare I find a user who comments on some aspect of a feature I had discussed ad nauseam with my team.”
Endless debates about indentations, rounded corners, and colour choices are UX’s version of the sunk cost fallacy. Nothing digital design can offer compares to the experiential joy of an Airbnb host in Dublin recommending the perfect nearby bar. Or a Chicago Lyft driver giving you a dozen amazing food and drink suggestions. Or cycling confidently through Portland at 11pm thanks to turn-by-turn instructions on a Pebble watch.
I can’t believe I’m linking to HubSpot, but let truth come from whence it may.
I’ve long been a fan of plain text emails (really plain text anything). And now I have some serious data to back up my gut feeling.
Aside from proper list segmentation, nothing boosts opens and clicks as well as an old school, plain-text email.
What’s really interesting about this data is that people say they prefer HTML emails and visuals, but the data shows the opposite of what people say:
In a 2014 survey, we asked over a thousand professionals whether they preferred HTML-based or text-based emails, and whether they preferred emails that consisted of mostly text or mostly images. Nearly 2/3 of the respondents said they preferred HTML and image-based emails...[However] In every single A/B test...The emails with fewer HTML elements won with statistical significance.
The authors of the article, I think, get to the root of what I’ve always felt about email: it’s a 1-to-1 interaction:
For example, shouldn't an email with an image of the ebook being promoted do better than an email with no visualization of the offer? Wouldn't just a plain email be boring, and not help explain the offer? Aren't humans wired to be attracted to beautiful design?
Unfortunately, this principle doesn't apply to email.
And the reason is simple: Email, unlike other marketing channels, is perceived as a 1-to-1 interaction.
Think about how you email colleagues and friends: Do you usually add images or use well-designed templates? Probably not, and neither does your audience. They're used to using email to communicate in a personal way, so emails from companies that look more personal will resonate more.
Again the data backing these claims up is quite significant:
For the click rate, we dove into data from over half a billion marketing emails sent from HubSpot customers. These customers vary in type of business, and have different segments, list sizes, and list compositions.
What we found was that even a single image reduced the click rate
That plain text performs better than HTML emails is no small claim, especially from a marketing company like HubSpot. The cynical part of me doubts that much will come of it, though. As the author of the article states:
Ultimately, in email, less is more.
This can be a tough pill for marketers to swallow (myself included)...But data repeatedly shows plain-text email wins, so it's up to us to decide whether or not we want to make the switch.
At least now I’ve got some good data to back up my gut.
An interesting look at a new phishing method:
There seems to be a trade-off, between maximizing screen space on one hand, and retaining trusted screen space on the other. One compromise could be to retain a small amount of screen space to signal that “the URL bar is currently hidden”, instead of giving up literally all screen space to the web page.
Safari on mobile has an interesting approach in that the url bar shrinks on scroll but the domain always stays visible in the UI. I like that.
Some browser makers seem to be trying more and more to get rid of the URL, both from an engineering and a UI/UX perspective. Personally I think we should be doing the opposite. We should double down on the URL of a website and make sure it’s treated as a cornerstone of browser UI.
You know, it wasn't that long ago. There was RSS. There were blogs...Now? [Social media sites] control what gets amplified and what gets monetized. A few conference rooms in Silicon Valley dictate our online culture.
What I actually really loved about this site and found rather witty and novel was how it appeared when linked to in my twitter feed.
I found the anarchist, "freedom fighter" approach to this site’s open graph metadata rather novel and amusing.
Phil once heard someone say, “I wouldn’t use a minimum viable parachute” to which he responds “I would if I was in a situation where I needed a parachute”. His point being:
MVP is not choosing a weak product over a good product. MVP is choosing to have something now and something better later.
Jeremy discusses some of his strategy around presenting code when your audience is more than just developers (or even code beginners):
logic is more important than code. In other words, figuring out what you’re trying to accomplish (and describing it clearly) is more important than typing curly braces and semi-colons. Programming is an act of translation. Before you can translate something, you need to be able to articulate it clearly in your own language first.
I think this is an excellent strategy for making code less overwhelming to people who would otherwise be unfamiliar with it.
I click Buzzfeed links and Verge links and Awl links and Polygon links for the same reason anybody does: there's a hole in my heart, and I hope 300-400 words of web content will fill it
A couple more assessments:
wouldn't it be nice to live in a world where I never have to read something that was written for the sole purpose of traffic and revenue?
More than half the time when I'm at Buzzfeed and The Verge (I keep using Buzzfeed and The Verge as examples because I visit them a lot apparently), I get the distinct feeling that this publication has “no special interest in publishing beyond value extraction through advertising”. And if that’s the case, then it’s really important that I, as a human being with presumably better things to do, should avoid publications that make me feel this way.
online publications seem to have coalesced around the worst elements: huge ads, disposable content, auto-play videos, Like and Tweet buttons which follow you around the internet, hidden embedded pixels that try and guess your eye color so they can sell you shampoo more effectively.
Towards the end:
The reason there’s no solid revenue alternative to advertising for most of these websites is that most of what they put out is junkfood clickbait designed to increase revenue through ads. They can’t monetize it because it’s worthless. Is that ironic?
Kind of a cool/fun way of authoring content and controlling its style by way of the emoji embedded in the content. Essentially, if he embeds a select emoji in his post, an equivalent photographic expression of his own face appears as an avatar for the post. Cool idea.
You don’t have to listen to the whole thing, but I thought this observation by Paul Ford (about 26 minutes in) was really great. It’s something that never really gets talked about. I feel like working in software is always talked about as this dreamy, change-the-world endeavor. But the reality is just getting something out the door that people will actually want to use can be a monumental effort. If you can do that, if you can ship something that’s good enough for people to want to use it, that’s pretty damn good. You should give yourself a pat on the back.
the fundamental problem that most people are facing is not, “how do I apply technology X to get, you know, incredible yields?” That’s a very startup-y, West Coast kind of problem. The problem most people have is: can I get a good enough piece of software shipped that people want to use? That’s it — that is it, and that is...still the fundamentally hardest thing that most people can pull off...And especially at an organizational level. If you’re in a big org, just trying to get good software out the door [is incredibly hard]. (emphasis mine)
day after day, year after year, people go to the whiteboard, use a faint marker, and then just leave that marker for the next person. After all, they think, it still has a little ink left. Maybe someone likes faint marker lines. Maybe someone will come along at night and refill it. Or it might naturally grow new ink. Really, who can say?
I think there’s a little gap of knowledge in all of us around how whiteboard markers work – which is why, when we pick one up and use it only to find its output faint and unreadable, we put the cap back on. “It’s probably got something left in it, I just don’t know how to coax it out. I’m sure someone else smarter than me will know how.”
This piece reminded me of someone I worked with who, whenever they found a used-up marker, would always put the cap back on and dramatically chuck it across the conference room towards the trash. It was a beautiful thing.
Some of my favorites:
“Much of the essence of building a program is in fact the debugging of the specification.” — Fred Brooks
“A common fallacy is to assume authors of incomprehensible code will be able to express themselves clearly in comments.” — Kevlin Henney
“Sufficiently advanced abstractions are indistinguishable from obfuscation.” — @raganwald
Then later, a few favorites in the comments:
“If at first you don’t succeed, call it version 1.0.” — Unknown
“It doesn’t make sense to hire smart people and tell them what to do; we hire smart people so they can tell us what to do.” — Steve Jobs
“The most important thing in communication is to hear what isn’t being said.” — Peter Drucker
“Management is doing things right; leadership is doing the right things.“ — Peter Drucker
“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” — Bill Gates
“No one’s life has yet been simplified by a computer.” — Ted Nelson
“If you think good architecture is expensive, try bad architecture.” — Brian Foote
“The bearing of a child takes nine months, no matter how many women are assigned.” — Frederick P. Brooks Jr.
“To err is human but to really foul up requires a computer.” — Dan Rather
“The business of software building isn’t really high-tech at all. It’s most of all a business of talking to each other and writing things down. Those who were making major contributions to the field were more likely to be its best communicators than its best technicians.” — Tom DeMarco
You can’t claim time on anyone else’s calendar, either. Other people’s time isn’t for you — it’s for them. You can’t take it, chip away at it, or block it off. Everyone’s in control of their time. They can give it to you, but you can’t take it from them.
The video was especially interesting.
A neat look at varying URL designs. The author touches on the idea of designing URLs so that users can construct a URL without having to actually use your site. Additionally, as users become more familiar with a site’s URL patterns, you’ll find it’s often quicker to edit the URL than to use the GUI to navigate the application or website.
I’ve been thinking about URL design more and more lately. In fact, on my icon gallery sites I designed the URLs of each icon “resource” to act as my API for accessing the icon’s artwork:
I could leverage my site’s URLs as an interface for getting icons, i.e. /icons/:id/ for the HTML representation of the icon and /icons/:id/:size.png for the actual icon (i.e. /icons/facebook/512.png would give me Facebook’s 512px app icon).
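As a sketch of what “URL as interface” buys you (my own illustration; the route shapes come from the description above):

```javascript
// Build icon-resource URLs from the patterns /icons/:id/ and
// /icons/:id/:size.png. Because the pattern is predictable, any
// consumer (or a human typing in the address bar) can construct
// these without ever touching the site's UI.
function iconUrl(id, size) {
  return size ? `/icons/${id}/${size}.png` : `/icons/${id}/`;
}
```

For example, `iconUrl('facebook', 512)` yields `/icons/facebook/512.png`, and `iconUrl('facebook')` yields the HTML page at `/icons/facebook/`.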
A great, visually-instructive talk about the event loop and how your JavaScript actually gets executed by the browser. He has some great examples of gotchas that are worth wrapping your head around. Like this one: if the user clicks the button, in what order are things logged?
button.addEventListener('click', () => {
  Promise.resolve().then(() => console.log('p1'));
  console.log('1');
});
button.addEventListener('click', () => {
  Promise.resolve().then(() => console.log('p2'));
  console.log('2');
});
His descriptions of tasks, animation callbacks, and micro tasks, from the perspective of the browser, were all eye opening. Great talk for anyone doing JavaScript.
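The core gotcha from the talk is that microtasks (promise callbacks) run before the next task. That ordering shows up outside the browser too; here’s a tiny sketch of my own:

```javascript
// Order of operations: synchronous code first, then the microtask
// queue (promise callbacks), then the next task (the timer).
const order = [];
setTimeout(() => order.push('task'), 0);               // queues a task
Promise.resolve().then(() => order.push('microtask')); // queues a microtask
order.push('sync');                                    // runs immediately
// once the event loop turns: order === ['sync', 'microtask', 'task']
```

The promise callback jumps the queue ahead of the zero-delay timer because the microtask queue is fully drained before the event loop picks up the next task.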
An interesting take on explaining CSS to “JavaScripts” through the lens of JSON:
Like JSON files, CSS files are not programs, but a series of declarations that programs use in order to create output. Also, they fail silently when they contain instructions that the receiving program does not understand.
If you approach CSS as you approach JSON you’ve taken a step toward understanding it.
Nicholas Carr, in reviewing a new book, is at it again: writing counterpoints to the Silicon Valley gospel.
Zuboff’s fierce indictment of the big internet firms goes beyond the usual condemnations of privacy violations and monopolistic practices. To her, such criticisms are sideshows, distractions that blind us to a graver danger: By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
Later:
Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers. But, as Zuboff makes clear, this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
I like how transparent and intentional the React team is with what they do. When there are significant or strategic changes, they don’t just say “here’s a new release” and plop something onto npm. Their corresponding release blog posts explain what they did and why they did it.
This particular post is a great example of their thoroughness. They explain their theoretical position on semver and why they intentionally release the way they do. I love it.
Patches are the most important type of release because they sometimes contain critical bug fixes. That means patch releases have a higher bar for reliability. It’s unacceptable for a patch to introduce additional bugs, because if people come to distrust patches, it compromises our ability to fix critical bugs when they arise — for example, to fix a security vulnerability
They also touched on their regret for how they versioned their recent releases: they conflated their software release version (“semver”) with their marketing release version (“hooks”) (and it came back to bite them in the butt). This is something we’d all do well to remember:
At React Conf, we said that 16.7 would be the first release to include Hooks. This was a mistake. We shouldn’t have attached a specific version number to an unreleased feature. We’ll avoid this in the future.
Good principles to remember.
What’s a crucial aspect of designing APIs?
The best API designers I know don’t stop at the “first order” aspects like readability. They dedicate just as much, if not more, effort to what I call the “second order” API design: how code using this API would evolve over time.
A slight change in requirements can make the most elegant code fall apart...[great APIs] are optimized for change.
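One hypothetical way to picture “optimized for change” (my example, not the author’s): positional arguments lock callers into a fixed order, so every new requirement breaks existing call sites, while an options object with defaults absorbs new requirements quietly:

```javascript
// First-order design: readable enough, but brittle under change. Adding a
// new parameter forces every caller to be updated or padded with undefined.
function fetchUsersV1(page, perPage, sortField) {
  /* ... */
}

// Second-order design: new options can be added later with defaults, and
// no existing call site has to change.
function fetchUsersV2({ page = 1, perPage = 20, sort = 'name' } = {}) {
  return { page, perPage, sort }; // stand-in for building the real query
}

console.log(fetchUsersV2({ page: 2 }));
// → { page: 2, perPage: 20, sort: 'name' }
```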
One thing that writing elegant software has in common with art: its crafters should remain cognizant of the overall macro vision of the project, at the same time they are working on its smallest micro details. JIRA, alas, implicitly teaches everyone to ignore the larger vision while focusing on details. There is no whole...JIRA encourages the disintegration of the macro vision.
A scathing assessment of how Jira is commonly used. Personally, I like the author’s conclusion: Jira can be great for issue tracking, but for anything larger it works against you. I also like the suggestion of prose as a description of the project. If we all had to write out – in natural language – what we were doing in software, I think we’d discover a lot more holes in our thinking which we’d then be forced to patch up. Jira makes it easy for you to bypass all of that and just write simple, vague depictions of what you’re trying to do.
A fascinating look at technology’s influence on doctors (based on the years of experience of a renowned doctor).
First, there’s the realization that some of the constraints prior to digitalization were actually beneficial:
piecing together what’s important about the patient’s history is at times actually harder than when [we] had to leaf through a sheaf of paper records. Doctors’ handwritten notes were brief and to the point. With computers, however, the shortcut is to paste in whole blocks of information—an entire two-page imaging report, say—rather than selecting the relevant details. The next doctor must hunt through several pages to find what really matters.
That’s when you start to realize that technology has its benefits, but you’ve likely traded one set of problems for another. For doctors, apparently, technology has become so overbearing that we’re hiring for jobs which previously didn’t exist, just to handle the computerization of everything:
We replaced paper with computers because paper was inefficient. Now computers have become inefficient, so we’re hiring more humans [to complete computer-related tasks].
Which results in us humans acting like robots in order to fulfill the requirements of the systems we built:
Many fear that the advance of technology will replace us all with robots. Yet in fields like health care the more imminent prospect is that it will make us all behave like robots
The author’s solution?
We can retune and streamline our systems, but we won’t find a magical sweet spot between competing imperatives. We can only insure that people always have the ability to turn away from their screens and see each other, colleague to colleague, clinician to patient, face to face.
JSX-like syntax in plain JavaScript - no transpiler necessary.
A tool that essentially lets you do JSX but in the browser (no transpiling). This is really awesome when you want to leverage native es modules for react in the browser and not transpile.
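The mechanism tools like this typically rely on is tagged template literals, which browsers support natively. A toy tag — emphatically not the real library’s implementation, just an assumption about the general shape — shows the idea:

```javascript
// A minimal hyperscript-style node factory, the kind of thing JSX
// normally compiles down to.
const h = (tag, props, ...children) => ({ tag, props, children });

// A hugely simplified template tag: the browser splits the literal into
// static strings and interpolated values for us; we pull the element name
// from the opening chunk and treat interpolations as children.
function html(strings, ...values) {
  const tag = strings[0].replace(/[<>/\s]/g, '');
  return h(tag, null, ...values);
}

const node = html`<button>${'Save'}</button>`;
console.log(node.tag, node.children[0]); // button Save
```

The real library does proper parsing and caching, but the payoff is the same: JSX-looking code with zero build step.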
Was just doing something similar and feel the same way. When building a prototype, you throw so many best practices to the wind:
Whereas I would think long and hard about the performance impacts of third-party libraries and frameworks on a public project, I won’t give it a second thought when it comes to a prototype. Throw all the JavaScript frameworks and CSS libraries you want at it (although I would argue that in-browser technologies like CSS Grid have made CSS libraries like Bootstrap less necessary, even for prototyping).
Remember, however, that prototypes quite often gain their utility through their ability to be like a piece of paper: you sketch out your ideas quickly with low friction, you learn what you don’t want, then you throw it away.
Build prototypes to test ideas, designs, interactions, and interfaces…and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.
Not a talk about software, but has some good insights into what makes a great leader:
The conductor of an orchestra doesn’t make a sound. He depends for his power on the ability to make other people powerful...I realized my job is to awaken possibility in other people...
He says the way you can tell if you’re awakening possibility in other people is by looking into their eyes. And if someone’s eyes aren’t reflecting that awakened possibility, then you have to ask:
Who am I being that my player’s eyes are not shining?
And that can extend to anyone in your sphere of influence: “who am I being that my [children’s / employees’ / friends’] eyes are not shining?”
With every passing day that I work in technology, I find this quote more and more relevant:
Replace "can you build this?" with "can you maintain this without losing your minds?"
An interesting, and short, look at problem areas of software development. This line has been lingering in my head for a few days:
Perhaps we should expect true advances in software “engineering” only when we learn how better to govern ourselves.
This sounds like a future we could very possibly live in:
In Vernor Vinge’s space opera A Deepness in the Sky, he proposes that one of this future’s most valuable professions is that of Programmer-Archaeologist. Essentially, the layers of accreted software in all large systems are so deep, inter-penetrating, idiosyncratic and inter-dependent that it has become impossible to just re-write them for simplicity’s sake – they genuinely can’t be replaced without wrecking the foundations of civilization. The Programmer-Archaeologist churns through this maddening nest of ancient languages and hidden/forgotten tools to repair existing programs or to find odd things that can be turned to unanticipated uses.
“It’s not a bug, it’s a feature” is an acknowledgment, half comic, half tragic, of the ambiguity that has always haunted computer programming.
In the popular imagination, apps and other programs are “algorithms,” sequences of clear-cut instructions that march forward with the precision of a drill sergeant. But while software may be logical, it’s rarely pristine. A program is a social artifact. It emerges through negotiation and compromise, a product of subjective judgments and shifting assumptions. As soon as it gets into the hands of users, a whole new set of expectations comes into play. What seems an irritating defect to a particular user—a hair-trigger toggle between landscape and portrait mode, say—may, in the eyes of the programmer, be a specification expertly executed.
Shortly after reading this article, I found this lovely t-shirt.
An interesting opinion piece on how “boring” technology can be a pretty safe bet:
The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood.
New technology has a much larger magnitude of failure modes that are unknown. We all know this. Searching for a way to fix something (which is a huge part of your job as a developer) that’s been around 10 years is much easier than searching for a way to fix something that’s been around 10 days.
It can be amazing how far a small set of technology choices can go...If you think you can't accomplish your goals with what you've got now, you are probably just not thinking creatively enough.
That seems to be the embodiment of JavaScript.
I thought this was a really great presentation about how to be effective at building software.
If you have a rockstar and everyone on the team is deferring to the rockstar, you have fewer people on your team taking initiative. If you have a team of 10 people and 9 of them, when you ask a question, just turn to look at the senior dev to see what their solution is, you’ve just lost 9 brains worth of thinking power.
You have to ask yourself:
What are the underlying problems that created the need for a rockstar to come in and fix everything?
He makes a point about how code reviews get a bad rap because a lot of teams only conduct code reviews when something is wrong:
Code reviews are a chance for the lead developer to flog someone in the public square because they did something that, I don’t know, was a memory hog. That is not what a code review should be. I think that code reviews should mostly be when someone does something that you like. Pull it up in front of the entire team and walk through what they did right. Then talk about all the other ways it could’ve been written that wouldn’t have been optimal. Show what the anti-pattern could’ve been, and praise what was done.
[As a senior developer] You should be constantly failing in front of your team then showing them how you learn from your mistakes, because that’s how you got where you are — that’s how you became a senior developer.
A really good point on thinking about longevity in the things you build:
Use stable open source tools if that option exists because if you build something in house you are now saying “this tool, in addition to our product, is something that we need to maintain and staff.”
Write code that’s small and easy to delete...when you optimize for deletion, you don’t have to write code that’s valid five years in the future...[google scale] you should be building features for 500 rather than optimizing for 5 million...weigh the tradeoffs and choose the thing that will make your team more productive, not the thing that will make your app best in ten years.
I’d never read this before. It’s a document from the React team stating, essentially, the philosophical underpinnings of why they build software the way they do.
One of the things I found interesting was their perspective on setState() and why it’s asynchronous. After all the things that’ve been said about setState(), their articulation of how they think about it is probably the most helpful I’ve heard, and what we should teach to beginners: it’s not so much about “setting state” as it is about “scheduling an update”.
There is an internal joke in the team that React should have been called “Schedule” because React does not want to be fully “reactive”.
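That “scheduling” mental model can be sketched with a toy batching queue — my simplification, nothing like React’s real internals — which also explains the classic surprise of reading state right after calling setState():

```javascript
// A toy component whose setState() only *schedules* a partial update;
// nothing is applied until the scheduler flushes.
class Component {
  constructor(state) {
    this.state = state;
    this.queue = []; // pending partial updates
  }
  setState(partial) {
    this.queue.push(partial); // scheduled, not applied
  }
  flush() {
    this.state = this.queue.reduce((s, p) => ({ ...s, ...p }), this.state);
    this.queue = [];
  }
}

const c = new Component({ count: 0 });
c.setState({ count: c.state.count + 1 });
c.setState({ count: c.state.count + 1 }); // still reads the *old* state
c.flush();
console.log(c.state.count); // 1, not 2
```

Both calls read count as 0, so both schedule { count: 1 } — the exact gotcha that the “scheduling an update” framing makes obvious (and why React’s functional setState(prev => ...) form exists).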
One of the real powers behind React, I think, is the ability you have as a developer to trace the state of the UI back to the data that produced it. This explicit design goal really does “turn debugging from guesswork into a boring but finite procedure”:
If you see something wrong on the screen, you can open React DevTools, find the component responsible for rendering, and then see if the props and state are correct. If they are, you know that the problem is in the component’s render() function, or some function that is called by render(). The problem is isolated.
If the state is wrong, you know that the problem is caused by one of the setState() calls in this file. This, too, is relatively simple to locate and fix because usually there are only a few setState() calls in a single file.
If the props are wrong, you can traverse the tree up in the inspector, looking for the component that first “poisoned the well” by passing bad props down.
This ability to trace any UI to the data that produced it in the form of current props and state is very important to React. It is an explicit design goal that state is not “trapped” in closures and combinators, and is available to React directly.
While the UI is dynamic, we believe that synchronous render() functions of props and state turn debugging from guesswork into a boring but finite procedure.
I also liked the section where they talked about their own internal style of developing the React codebase and how practicality generally trumps elegance:
Verbose code that is easy to move around, change and remove is preferred to elegant code that is prematurely abstracted and hard to change.
A thoughtful writeup of how Jeremy prepares for his conference talks. I like this part about how even a plain text file, which seems open-ended, still enforces a certain kind of linear constraint, whereas a blank sheet of paper and a pencil is truly more open-ended:
I used to do this mind-mapping step by opening a text file and dumping my thoughts into it. I told myself that they were in no particular order, but because a text file reads left to right and top to bottom, they are in an order, whether I intended it or not. By using a big sheet of paper, I can genuinely get things down in a disconnected way (and later, I can literally start drawing connections).
The first twenty minutes or so of this talk (before he gets into the Britain-specific stuff) are absolutely fantastic.
Education should prepare young people for jobs that do not yet exist, using technologies that have not yet been invented, to solve problems of which we are not yet aware.
His main point is that we (in computers) often put too much focus on technology and not enough on ideas. He showed this really cool video about how you could illustrate sorting algorithms to kids without using any technology. His point is that we need more teaching that encourages inquisitiveness and imagination.
He also makes good arguments about why we should teach “computer science” (again, ideas not technology) to kids. Technologies come and go, but the underlying ideas persist:
Why do we teach biology to kids? Do we expect every kid to become a biologist? No. It’s about coming to understand the world you live in and how you can navigate it, how to take control of events in your life and not just be at their mercy.
Education shouldn’t be about teaching a skill and spitting someone out at the end who is armed with that skill. Rather, we teach skills of reasoning and dedication and problem solving so that when they get spit out, they can navigate the successive waves of technology that will come at them over their careers. Knowing one won’t be enough.
We love to talk about “Atomic Design” and “Pattern Libraries” but I found this to be an even more thoughtful look at why those things aren’t silver bullets and how you need an even more thoughtful, overarching design system.
A strong design is informed by a view of the big picture, an understanding of the context, and strong art direction — even at the cost of consistency or time.
Design doesn’t emerge by skinning or theming components; it needs a perspective and a point of view — it desperately needs creative guidance. However, I can’t help but notice that when we are building these lovely pattern libraries and design systems and style guides using fantastic tools such as Pattern Lab and living style guides, we tend to settle on a single shared view of how a pattern library should be built and how it should appear. Yet that view doesn’t necessarily result in a usable and long-lasting pattern library
The article as a whole felt lacking, but there were a few particular lines that caught my eye as relevant:
We’re afraid of being bad at [hobbies]. Or rather, we are intimidated by the expectation...that we must actually be skilled at what we do in our free time. Our “hobbies,” if that’s even the word for them anymore, have become too serious, too demanding, too much an occasion to become anxious...If you’re a jogger, it is no longer enough to cruise around the block; you’re training for the next marathon. If you’re a painter, you are no longer passing a pleasant afternoon, just you, your watercolors and your water lilies; you are trying to land a gallery show or at least garner a respectable social media following. When your identity is linked to your hobby...you’d better be good at it, or else who are you?
Then later:
The demands of excellence are at war with what we call freedom. For to permit yourself to do only that which you are good at is to be trapped in a cage
This probably stuck out to me because of my post “The Art of the Side Project” I wrote back at the beginning of 2017. Still seems relevant.
Interview with Jason Lengstorf, a developer advocate for Gatsby. I liked this bit, which refers to GraphQL but describes how I feel about most technology I use. It’s my adoption cycle:
1. Learn about GraphQL
2. Dismiss it as a fad
3. Keep hearing about it
4. Try it out
5. Hate it
6. Keep trying
7. Things click
8. Never willingly use anything but GraphQL ever again
An interesting look at what happened to RSS. What I found interesting was the author’s “moral of the story” about how hard building consensus is (whether it’s in open source software, or even just in a business):
But the more mundane reason [why RSS failed] is that centralized silos are just easier to design than common standards. Consensus is difficult to achieve and it takes time, but without consensus spurned developers will go off and create competing standards. The lesson here may be that if we want to see a better, more open web, we have to get better at not screwing each other over.
This just seems so true, probably because I subscribe to “people can come up with statistics to prove anything”:
If you have enough data you can prove anything. Which is to say that with enough data everything is true, so nothing is. All great insights I’ve ever seen have come from n=1.
Following on the heels of the previous tweet, there’s this piece from the ever-insightful folks over at ia.net. Here are a few of the pieces that stood out to me.
Not the master designer but the user is the arbitrator of good design.
The world was sucked into a medium that allowed measuring the performance of forty-one shades of blue. And thus the notion of good and bad design radically changed. Design used to be about sensitivity, beauty, and taste. Today, design is about what engages users and grows profits.
The key performance indicator for design has changed from beauty to profit. Measuring design has transformed a handicraft into an engineering job. The user is king. The user decides what is good and what is bad design
We are also beginning to realize eliminating what is not measurable may come at an unmeasurable cost.
How much of what makes us human is truly measurable and verifiable?
How do we measure friendship? By the number of replies per month? By the length of replies? With computer linguistics? How do we measure usefulness? Lots of page views? Few page views? Stickiness? Number of Subscriptions? How do we measure trust? By the number of likes? Retweets? Comments? How do we measure truth?
However, out of experience, we know those good things are rare, that quality always comes at a price and that the price tag of quality grows exponentially.
We also know that what is truly good is somehow beautiful, and what is truly beautiful is somehow good. It’s not a direct relationship, it’s a deeper connection.
My comment: Silicon Valley’s law: all software problems will be resolved with more software
An interesting little story about how JSON rose to its prominence today. It’s probably an illustration of the rule of least power (“choose the least powerful language suitable for a given purpose”). In fact, the article’s author states as much:
my own hunch is that people disliked XML because it was confusing, and it was confusing because it seemed to come in so many different flavors.
The author goes on to say:
XML was designed to be a system that could be used by everyone for almost anything imaginable. To that end, XML is actually a meta-language that allows you to define domain-specific languages for individual applications...And yet here was JSON, a format offering no benefits over XML except those enabled by throwing out the cruft that made XML so flexible.
The simplicity of JSON, which I’m sure is often ridiculed, is quite fascinating in comparison to XML:
The first lines of a typical XML document establish the XML version and then the particular sub-language the XML document should conform to. That is a lot of variation to account for already, especially when compared to JSON, which is so straightforward that no new version of the JSON specification is ever expected to be written.
There were a few little historical tidbits I found interesting in this story. For example, when Douglas Crockford first implemented what would become “JavaScript object notation” by embedding a <script>
tag in HTML, he ran into a problem where dynamically written keys could conflict with reserved words in JavaScript, so he just required all key names to be quoted. JSON requires quoted key names to this day.
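You can still see that rule enforced today (a small illustration of mine, not from the article): JSON.parse() accepts quoted keys, including ones that collide with reserved words, and rejects bare ones.

```javascript
// Quoted keys sidestep JavaScript's reserved words: `{do: true}` was a
// syntax error in early engines, but a quoted key never can be.
const parsed = JSON.parse('{"do": true, "new": 1}');
console.log(parsed.do, parsed.new); // true 1

// The spec still mandates quotes, so bare keys are rejected:
let failed = false;
try {
  JSON.parse('{do: true}');
} catch (e) {
  failed = true; // SyntaxError: unquoted key
}
console.log(failed); // true
```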
There’s also the story about the name and the spec:
Crockford and Morningstar...wanted to name their format “JSML”, for JavaScript Markup Language, but found that the acronym was already being used for something called Java Speech Markup Language. So they decided to go with “JavaScript Object Notation”, or JSON. They began pitching it to clients but soon found that clients were unwilling to take a chance on an unknown technology that lacked an official specification. So Crockford decided he would write one.
Oh, and there was some linked reading in the article, some of which I followed. I liked this comment on XML, which put into words my feelings based on experience with XML:
I spend a disproportionate amount of my time wading through an endless sea of angle brackets and verbose tags desperately searching for the vaguest hint of actual information
The path to success is through not trying to succeed. To achieve our highest goals, we must be willing to abandon them.
In a lot of ways, that’s the premise of this talk. And I, for one, thought his points resonated a lot with my own experiences of creativity. There’s quite a few paradoxical findings here.
One outcome was to realise that there’s a tendency (in performance, accessibility, or SEO) to focus on what’s easily measurable, not because it’s necessarily what matters, but precisely because it is easy to measure.
Too true.
I think incremental and iterative improvements can be served well by measurements. But vast, innovative improvements and directional changes in product need more than analytics. They need vision and taste from humans.
Here’s a thought: things that are measurable are like the micro of products, whereas vision and taste are the macro.
Once again, an interesting opinion from Nicholas Carr into our current political state and its relationship to modern technology.
Twitter’s formal qualities as an informational medium—its immediacy and ephemerality, its vast reach, its lack of filters—mirror and reinforce the impulsiveness, solipsism, and grandiosity that define Trump’s personality and presidency and, by extension, the times. Banal yet mesmerizing, the president’s Twitter stream distills our strange cultural moment—the moment the noise became the signal.
Gambling and social media:
A similarly seductive dynamic [to gambling] plays out on the screens of social media apps. Because tweets and other posts also offer unpredictable rewards—some messages go viral, others fall flat—they exert the same kind of pull on the mind. “You never know if your next post will be the one that delivers a jackpot.”
And how that relates to Trump:
Trump’s tweets don’t just amass thousands of likes and retweets. They appear, sometimes within minutes of being posted, in high-definition blowups on "Fox & Friends" and "Morning Joe" and "Good Morning America." They’re read, verbatim, by TV and radio anchors. They’re embedded in stories in newspapers and on news sites, complete with Trump’s brooding profile picture. They’re praised, attacked and parsed by Washington’s myriad talking heads. When Trump tweets—often while literally watching the TV network that will cover the tweet—the jackpot of attention is almost guaranteed. Trump, by all accounts, spends an inordinate amount of time monitoring the media, the outsized coverage becomes all the more magnified in his mind. And as the signals flow back to him from the press, he is able to fine-tune his tweets to sustain or amplify the coverage. For Trump, in other words, tweeting isn’t just a game of chance. It’s a tool of manipulation. Twitter controls Trump, but Trump also controls Twitter—and, in turn, the national conversation.
On the nature of the medium that is Twitter:
With its emphasis on brief messages and reflexive responses, Twitter is a medium that encourages and rewards [a] reductive view of the world...it’s an invitation to shallowness.
And what that leads to:
Twitter relieves the president [and many of its users] of the pressure to be well-informed or discerning, even when he’s addressing enormously complicated issues like the North Korean nuclear threat...Twitter gives Trump [and again its users] license to sidestep rational analysis.
More acutely:
We listen so intently to Trump’s tweets because they tell us what we want to hear about the political brand we’ve chosen. In a perverse way, they serve as the rallying cries of two opposed and warring tribes...[Trump] succeeds in pulling the national conversation down to his own level—and keeping it there.
On a more philosophical level:
Thanks to the rise of networks like Twitter, Facebook and Snapchat, the way we express ourselves, as individuals and as citizens, is in a state of upheaval, an upheaval that extends from the family dinner table to the upper reaches of government. Radically biased toward space and against time, social media is inherently destabilizing. What it teaches us, through its whirlwind of fleeting messages, is that nothing lasts. Everything is disposable. Novelty rules. The sense that “nothing matters,” that wry, despairing complaint of people worried about national politics right now, isn’t just a Trump phenomenon; it’s built into the medium.
Jobs responding to a question/insult about a particular technology. I think his response is a good reminder that before any technology, you need vision and principles for what you’re doing. Those will guide you more than any technology. In fact, focusing too much on just what’s going on with the tech can often blind you to the potential of what you’re trying to do.
As we have tried to come up with a strategy and a vision for Apple, it started with “what incredible benefits can we give the customer, where can we take the customer?” not starting with “let’s sit down with the engineers and figure out what awesome technology we have and how we are going to market that?” And I think that’s the right path to take.
I think we often focus on designing or building an element, without researching the other elements it should connect to—without understanding the system it lives in.
Later:
the visual languages we formalize—they’re artifacts that ultimately live in a broader organizational context. (And in a context that’s even broader than that.) A successful design project understands that context before settling on a solution
This is just a fantastic deep dive into working with dates and time in programming.
Look at this screenshot of the console.
One of the most powerful web debugging techniques I'm aware of is adding colors to console.log. Makes it possible to spot high level patterns in an otherwise noisy stream of data.
A cool technique I didn’t know existed. There’s also a gist on how to implement.
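For reference (the linked gist isn’t reproduced here): in browser devtools the styling hook is the %c directive, which applies CSS to the rest of the message; in Node, the closest equivalent is ANSI escape codes. A minimal sketch:

```javascript
// Browser devtools version (CSS via the %c directive):
//   console.log('%crender pass', 'color: green; font-weight: bold');

// Node version: wrap strings in ANSI color codes instead.
const green = (s) => `\x1b[32m${s}\x1b[0m`; // 32 = green, 0 = reset
const red = (s) => `\x1b[31m${s}\x1b[0m`;   // 31 = red

console.log(green('cache hit'), red('cache miss'));
```

Either way, the payoff is the same: colored bands make high-level patterns jump out of an otherwise uniform stream of log lines.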
A neat little .gif depicting the idea of downsampling in computer graphics but on a physical, real-world object.
Jeremy quoting from and commenting on the new book Flexible Typesetting from A List Apart.
It appears the book nods to the materiality of creating things for the web. Specifically, typography on the web should honor and respect the nature of its medium, which tends towards design being a suggestion, not a mandate. Here’s a quote from the book:
Of course typography is valuable. Typography may now be optional [on the web], but that doesn’t mean it’s worthless. Typographic choices contribute to a text’s meaning. But on the web, text itself (including its structural markup) matters most, and presentational instructions like typography take a back seat. Text loads first; typography comes later. Readers are free to ignore typographic suggestions, and often prefer to. Services like Instapaper, Pocket, and Safari’s Reader View are popular partly because readers like their text the way they like it
As the author states, “Readers [on the web] are typographers, too.”
First, the author gives us a preface from David Graeber detailing what he means by “bullshit”:
Huge swathes of people...spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it...These are what I propose to call ‘bullshit jobs’.
Then gives a good example of the kind of bullshit going on in the web: CNN claiming to have the highest number of “video starts” in their category. This is a stat that we all know doesn’t represent anything real but I’m sure goes over well in a marketing meeting:
I don’t know exactly how many of those millions of video starts were stopped instantly by either the visitor frantically pressing every button in the player until it goes away or just closing the tab in desperation, but I suspect it’s approximately every single one of them.
Or what about those “please sign up for our newsletter” emails?
[newsletter signup forms are everywhere.] Get a nominal coupon in exchange for being sent an email you won’t read every day until forever.
As a developer, you probably think “these things only exist because of marketers”. Then the author hits on a few things closer to home, which I think in certain cases are valid points:
there are a bunch of elements that have become something of a standard with modern website design that, while not offensively intrusive, are often unnecessary. I appreciate great typography, but web fonts still load pretty slowly and cause the text to reflow midway through reading the first paragraph
The article is a good look at what the web is becoming, and at how some of the things we think are so great might, if we step back for one second, turn out not to be so great after all.
A reminder about how different internet access is around the world.
Eric was in rural Uganda teaching web development and trying to access the internet where his only option for connectivity was satellite internet:
For geosynchronous-satellite internet access, the speed of light becomes a factor in ping times: just having the signals propagate through a mixture of vacuum and atmosphere chews up approximately half a second of travel time over roughly 89,000 miles (~152,000km)... That’s just the time for the signals to make two round trips to geosynchronous orbit and back. In reality, there are the times to route the packets on either end, and the re-transmission time at the satellite itself. But that’s not the real connection killer in most cases: packet loss is. After all, these packets are going to orbit and back. Lots of things along those long and lonely signal paths can cause the packets to get dropped. 50% packet loss is not uncommon; 80% is not unexpected. So, you’re losing half your packets (or more), and the packets that aren’t lost have latency times around two-thirds of a second (or more). Each.
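Eric’s “approximately half a second” figure checks out with back-of-the-envelope math. A quick sketch, using the ~152,000 km round-trip distance from the quote:

```javascript
// Back-of-the-envelope check on the latency figure in the quote.
// 152,000 km is the quoted distance for two round trips to
// geosynchronous orbit; c is the speed of light in km/s.
const SPEED_OF_LIGHT_KM_S = 299792.458;

function propagationSeconds(distanceKm) {
  return distanceKm / SPEED_OF_LIGHT_KM_S;
}

console.log(propagationSeconds(152000).toFixed(3)); // ≈ 0.507 seconds
```

And that’s the physical floor before routing, re-transmission, or packet loss add anything on top.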
The web and its foundational architecture of TCP/IP are actually pretty amazing when you stop and think about them in light of Eric’s story. But anyway, his point was that to combat the problems of satellite-only connectivity, people create caching servers, but those become problematic when everything is HTTPS: HTTPS is meant to stop man-in-the-middle attacks, and a caching server is essentially a man-in-the-middle. Eric’s point is that “Securing the web literally made it less accessible to many, many people around the world.”
I don’t really have a solution here. I think HTTPS is probably a net positive overall, and I don’t know what we could have done better. All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.
I actually really love Netlify’s ethos about how deploys should be so mundane, routine, and predictable that you could deploy every minute if you wanted. So this project was a cool outworking of that vision:
I decided to look at what could happen when continuous deployment is so mundane, so solved, so predictable, that I could deploy with confidence every day. Every hour. Every minute.
What I love about the JAMstack
Imagine wanting to set up a cron job to scrape Stack Overflow once a day for support questions about your open source project. It’s hard to justify paying seven bucks a month for a server for something like this, but in the serverless pay-per-execution landscape it would likely land under the free tier or cost a couple of cents a month.
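As a sketch of that idea: the scheduled function mostly reduces to building one query against the public Stack Exchange API (the endpoint and parameters below are real API surface as I understand it, but the tag name is a placeholder and the scheduling wiring is left as a comment):

```javascript
// Sketch of the daily-scrape idea: a pay-per-execution function
// that asks the Stack Exchange API for new questions tagged with
// your project. "my-project" is a hypothetical tag.
function questionSearchUrl(tag, fromEpochSeconds) {
  const params = new URLSearchParams({
    site: 'stackoverflow',
    tagged: tag,
    fromdate: String(fromEpochSeconds),
    sort: 'creation',
    order: 'desc',
  });
  return `https://api.stackexchange.com/2.3/questions?${params}`;
}

// A serverless handler would just fetch this URL on a daily schedule,
// e.g. exports.handler = async () => fetch(questionSearchUrl('my-project', yesterday))
```

One HTTP request a day is exactly the shape of workload that rounds to free under per-execution pricing.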
I’ve been trying this on a few projects with Netlify and it works like a charm. Loving it.
I totally agree with Ethan’s assessment here. People are always saying “the web is slow, here’s how to make it fast” and they solve the problem from a technology perspective. But the mainstream web isn’t primarily slow because of ignorance about how to make it fast. It’s slow because, at the core of the web’s essence (and this is something that I think just happened organically over time), people expect everything on it to be free. So money has to be made somewhere, and it gets made by the boatloads of tracking/analytics JavaScript and other bandwidth-bloating assets that end up on websites.
ultimately, the web’s performance problem is a problem of profitability. If we’re going to talk about bloated pages, we should do so in context: in the context of a web where digital advertising revenue is cratering for publishers, but is positively flourishing for Facebook and Google. We should look at the underlying structural issues that incentivize a company to include heavy advertising scripts and pesky overlays, or examine the market challenges that force a publisher to adopt something like AMP.
Let’s stop kidding ourselves. This is the core issue.
Stage presets are being removed in Babel v7 and Henry Zhu makes the case as to why.
Personally, I’m all for it. Stage presets always confused me. Making developers opt in explicitly to varying levels of language experimentation will be beneficial to everyone, because it will force us as a community to talk more specifically about the (often quite disparate) evolutionary changes in the language.
Removing the presets would be considered a "feature" since it means someone would have to make an explicit decision to use each proposal, which is reasonable for any proposal since they all have varying levels of instability, usefulness, and complexity.
I like this. You can still get proposals grouped together if you want, just not from the “official” project; official groupings invite developers to put too much trust in them. Leave that up to third parties, which should encourage developers to better vet their sources for grouped proposals. I think this is the right choice because Babel should be low-level. Other, higher-level frameworks or tools can make (and obfuscate) the choices of grouped proposals for people (like create-react-app).
people will have to opt-in and know what kinds of proposals are being used instead of assuming what people should use, especially given the unstable nature of some of these proposals. If you use another preset or a toolchain, (e.g. create-react-app) it's possible this change doesn't affect you directly.
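For context, the change looks something like this in a project’s .babelrc. This is a sketch: `stage-2` is the kind of preset being removed, and the two proposal plugins are just examples of what you’d now have to list explicitly instead.

```json
{
  "presets": ["@babel/preset-env"],
  "plugins": [
    "@babel/plugin-proposal-object-rest-spread",
    "@babel/plugin-proposal-class-properties"
  ]
}
```

Where a Babel 6 config might have said `"presets": ["stage-2"]` and pulled in every stage-2 proposal implicitly, each proposal now has to be named, which is exactly the explicit opt-in Henry describes.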
Remember too that it’s not only Babel that has to update with stage presets, but everything downstream of it. That’s too much churn to support for something like Babel, IMO. Unsustainable. So this again was a good choice.
Once new syntax is proposed, many things need updating: parsers (babylon), syntax highlighting (language-babel), linters (babel-eslint), test frameworks (jest/ava), formatters (prettier), code coverage (istanbul), minifiers (babel-minify), and more.
The maintainers seem to recognize that’s too much for Babel to bite off. They’d be better off if their skin weren’t in that game.
In many ways, it's surprising we even attempt to handle [stages] in Babel itself given the constant updates and churn.
The podcast itself had some interesting tidbits in it, but what I really liked was this little snippet from Andrew:
Code is temporary. Ideas persist.
Side note: this is a good question to ask yourself when coming into or architecting a project — what are the ideas underlying this code? Code can be refactored, but only within the framework of the ideas which support it (otherwise you’re looking at a significant rewrite).
I’d never known about this repo until hearing about it on “The React Podcast”. It’s an interesting conceptual approach to the underpinnings of React. In other words, it’s an expression of the ideas in React, regardless of the code that implements them.
The actual implementation of React.js is full of pragmatic solutions, incremental steps, algorithmic optimizations, legacy code, debug tooling and things you need to make it actually useful. Those things are more fleeting, can change over time if it is valuable enough and have high enough priority. The actual implementation is much more difficult to reason about.
Iconic iconist Louie Mantia on Twitter:
Icons: they’re not logos.
Use elements of your brand like color, shape, weight, and style, but resist the urge to just use your logo.
This first photo was illustrative of his point, but he followed up with another tweet illustrating how different brands could use the same metaphor of a TV in designing their icon without losing brand “equity” (see the photo).
I, for one, like it. I’d love to see more icon design like this in the wild.
I’ve always enjoyed following Dan, he brings a dose of reality and empathy to a tech world often awash with exaggerated claims.
@thekitze
If we would start webdev from scratch and had to choose between:
- CSS vs css-in-js
- REST vs GraphQL
- Templates vs JSX
No sane person would choose the first options
@dan_abramov
There are three things wrong with this tweet:
- Calling people insane for technical choices is an asshole move
- This paints React community as obnoxious know-it-alls
- Tech on the right is both overkill for smaller sites (majority of the web) and still far from being “done”
@thekitze
Dan, this has nothing to do with React or frameworks.
What I'm trying to say is: just imagine if these weren't technical choices and we had to invent ways of styling, passing data & writing components.
I don't know if people are trying too hard to misunderstand the tweet.
@dan_abramov
It has to do with React because you are prominent in the React community. Whether you want it or not, people from other communities reading this will think “React developers agree with this person that I’m insane for liking e.g. CSS”.
@thekitze
Sane might have been a wrong word. Maybe "experienced".
Still, people are misunderstanding the "invent" part of the tweet. If we had to invent styling most experienced developers would choose tight coupling of styles to elements (otherwise Sass/Less/BEM/Modules wouldn't exist)
And then this — IMO an incredibly insightful, reasoned response in a technological discussion.
@dan_abramov
Again, you’re implying that the other side of the tradeoff only appeals to inexperienced people. This is super patronizing. Have you considered that maybe you lack the experience to appreciate simpler options that match the problem domain more closely?
I love that phrase: “Have you considered that maybe you lack the experience to appreciate simpler options that match the problem domain more closely?”
I love when someone conjoins just the right words in just the right order. Thanks Dan.
toolchains have replaced know-how...we must rid ourselves of the cult of the complex. Admitting the problem is the first step in solving it.
That’s how Zeldman has begun his latest tirade. Granted, the delivery is classic Zeldman, but if you wade through some of the ranting and listen for his points, I think he makes some valid ones.
Alas, many new designers and developers (and even many experienced ones) feel like they can’t launch a new project without dragging in packages from NPM...with no sure idea what the code therein is doing...For them, it’s a matter of job security and viability. There’s almost a fear that if you haven’t mastered a dozen new frameworks and tools each year (and by mastered, I mean used), you’re slipping behind into irrelevancy. HR folks who write job descriptions listing the ten thousand tool sets you’re supposed to know backwards and forwards to qualify for a junior front-end position don’t help the situation.
37signals, makers of Basecamp and ever the buckers-of-trends, wrote this piece about why a monolith architecture (vs. the trendy microservices) is the right technological solution for them. At a more general level, they make this important observation:
The patterns that make sense for organizations orders of magnitude larger than yours, are often the exact opposite ones that’ll make sense for you. It’s the essence of cargo culting. If I dance like these behemoths, surely I too will grow into one. I’m sorry, but that’s just not how the tango goes.
This is true not just of technical patterns but of general organizational approaches too, though the idea that you shouldn’t run HR like a 50,000-person company when you have 50 people seems obvious to most (with some exceptions).
A Fourth of July soliloquy:
As China starts outdoing us economically, technically and strategically, we are turning Chinese, slowly losing the spiritual, cultural and political texture that made us different....Silicon Valley spies on us like the Chinese Government—and in many ways they see China as their role model. They admire entrepreneurs that don’t sleep, don’t see their children, don’t care about such touch-me-feel-me nonsense like the truth, justice, beauty or how others feel.
So what makes the West unique? The author suggests the following 16 items:
- That all men are by nature equally free and independent
- That all power is vested in the people
- That government is instituted for the common benefit
- That no man is entitled to exclusive privileges
- That the legislative and executive powers should be separate and distinct from the judicative
- That elections ought to be free
- That all power without consent of the representatives of the people is injurious
- That in prosecutions a man hath a right to demand the cause and nature of his accusation
- That excessive bail ought not to be required, nor excessive fines imposed nor cruel and unusual punishments inflicted
- That general warrants are grievous and oppressive
- That the ancient trial by jury is preferable to any other
- That the freedom of the press is one of the greatest bulwarks of liberty
- That a well regulated militia is the proper defense of a free state; that standing armies, in time of peace, should be avoided as dangerous to liberty
- That the people have a right to uniform government
- That no free government can be preserved to any people but by a firm adherence to justice, moderation, temperance, frugality, and virtue
- That religion can be directed by reason and conviction, not by force or violence
And what's so special about these? They are ideas whose impact cannot be directly measured, which is why perhaps in our day they go undervalued:
The West has 16 things to lose [which cannot] be touched, bought or expressed in numbers. It’s not the GDP, it’s not the number of STEM graduates, it’s not the top positions in the charts of the biggest banks. What we can hope is that the bureaucrats and technocrats continue to undervalue how powerful the unmeasurable is. These 16 ideas have survived Napoleon, ended the First World War and won against the Nazis. They have survived the Khmer Rouge and they have survived Stalinism. Happy Fourth of July.
His wording was specific to CSS grid (which I’m also in the process of learning) but was a good articulation of how I also learn new technology:
- “I’m going to learn how to use NEW TECHNOLOGY X to produce something I’m already familiar with.”
- “Huh, I can produce this thing I’m familiar with using NEW TECHNOLOGY X way more efficiently than I ever could before.”
- “—okay, now I’ll try making something with NEW TECHNOLOGY X I’ve never even considered.”
Ethan commenting on the design exercise he often does at conferences and workshops of printing webpages then cutting up the UI into pieces in order to find patterns. An exercise in designing language before any exercise in designing UIs can be critical to success. Words have meaning.
the primary benefit to creating a pattern library isn’t the patterns themselves...But rather...the language used to name, organize, and find patterns is what allows [us] to use those patterns effectively—and that is what creates more consistent designs...the words we use to talk about our design are, well, valuable design tools themselves.
Thoughts spurred by Google’s Duplex:
Although chatbots have been presented as a means of humanizing machine language — of adapting computers to the human world — the real goal all along has been to mechanize human language in order to bring the human more fully into the machine world. Only then can Silicon Valley fulfill its mission of capturing the entirety of human experience as machine-readable, monetizable data.
As always, great thoughts from Frank. Everything in this article is great. I could’ve copy/pasted the entire article, but instead I tried to practice some restraint and only copy/paste the stuff that really stuck out to me (honestly though, it’s all good, go read it). Emphases are mine.
On feedback being an art:
clients, co-workers, and bosses aren’t practiced in analyzing design, and designers, while well-versed in giving feedback, are often less experienced in how to productively receive it. Feedback should be a liberal art for everyone.
On gut reactions:
One particularly tricky aspect of criticizing design is that a lot of the work is meant to be quickly read (like logos) or intuitively understood (like interfaces and websites). Does this validate gut reactions or hot takes? I’m uncertain, but it can shift power towards the people who are the least invested in the process.
On design ridicule:
Any defining characteristic of the work will probably be the subject of ridicule.
On the need for specificity:
Praise is meaningless without specificity...A robust feedback process must be specific in its praise, because succeeding is enhancing good choices as well as fixing mistakes.
And a great quote from Michel Foucault on “scintillating” design criticism:
Criticism that hands down sentences sends me to sleep; I’d like a criticism of scintillating leaps of imagination. […] It would bear the lightning of possible storms.
A few excerpts I found interesting from this extended interview with Jony Ive from Hodinkee (which touts itself as a “preeminent resource for modern and vintage wristwatch enthusiasts”).
First: Ive points out the interesting parallels between the evolution of personal computing and personal time telling (which I had never noticed before):
I think there is a strong analog to timekeeping technology here for our own products and computational devices. Think about clock towers, and how monumental but singular they are. They are mainframes. From there, clocks moved into homebound objects, but you wouldn’t have one in every room; you might have one for the whole house, just like PCs in the 1980s. Then maybe more than one. Then, time-telling migrated to the pocket. Ultimately, a clock ended up on the wrist, so there is such a curious connection with what we wanted to do, and that was a connection we were really very aware of.
Ive made this observation on Apple’s mindset when approaching a product: it’s not just the destination, it’s all you learn along the journey (emphasis mine):
It was fairly clear early on that we wanted to design a range of products, without getting too convoluted, that would broaden how relevant we were. And working in gold and ceramic was purposeful – not only to expand who Apple is, but also from a materials science perspective. As you know, at the end of any project, you have the physical thing (the watch in this case), and then you have all that you have learned. We are always very mindful that the product not be all that we have in the end, and the Edition yielded much to us. We have now worked with ceramic and with gold, and our material sciences team now understands these fundamental attributes and properties in a way they didn’t before. This will help shape future products and our understanding of what forms make sense.
Ive’s respect for companies (and I think by extension, people) who are willing to buck outside pressures in order to be true to their inner compass, which leads to producing something fully unique to their specific characteristics and traits (which no one else in the world can authentically reproduce):
I have so much respect for many of those other brands – Rolex, Omega – because there is the remarkable longevity combined with such an obvious and clear understanding of their own unique identity. It’s rare but inspiring when you see the humble self-assurance of a company that ignores short-term market pressures to pursue their own path, their own vision. Their products seem to testify to their expertise, confidence, and quiet resolve. Their quality and consistency is rightly legendary.
I just really liked this comment in particular and wanted to make note of it:
A person is not a brain driving a meat robot; it all runs together. If work is stymied, ask: are you eating clean? Getting enough sleep? Did your heart pump more than a sloth today? Start with your body, not your work methods. Trust me.
An entertaining opinion on the state of Bitcoin and its parallels to the early days of the web.
This particular quote I enjoyed, as it points out what I like to think of as the Jurassic Park / Ian Malcolm problem: we race ahead asking “can we do this?” and often don’t stop to ask “should we do this?” until well after we’ve already encountered the ramifications:
the frameworks are coming to build such tools and make them anonymous and decentralized, so that they might endure, and, as with all internet things, they’ll arrive well ahead of the ethics we need to make sense of them.
Going along with another great post from Jeremy Keith, Ethan comments on the ongoing controversy around Google’s continuing attempts to promote proprietary technologies (over open ones) with the AMP project. He draws an interesting parallel with the political climate today in America:
[the] trend of corporations-as-activists is the result of an ongoing politicization of the public sphere, which is itself the result of a government that’s unable (or unwilling) to serve its citizens
Then concludes:
the creation of AMP isn’t just Google’s failure, but our failure...of governance of our little industry. Absent a shared, collective vision for what we want the web to be—and with decent regulatory mechanisms to defend that vision—it’s unsurprising that corporate actors would step into that vacuum, and address the issues they find.
And once they do, the solutions they design will inevitably benefit the corporation first, and the rest of us second. If at all.
I thought the author’s comment here on Twitter’s response to banning spam bots (whereas Facebook turned to tuning their algorithms so as to not “censor”) was interesting, i.e. “you have the power to shape your own destiny”:
Here is a heavy dose of practical philosophy for you: You know who decides? Those who take responsibility. And those who decide and take responsibility shape their destiny. For those who wait and see, other people will decide. This is a moment where Twitter can make precious ground over a seemingly invincible Facebook.
The article is an interesting look at UI design in an automated age. It argues we should clearly differentiate humans and bots in user interfaces, a distinction which, right now, is mentally taxing at best and impossible at worst:
Programming an army of bots in a system without checks and balances is economically interesting. It happens because it is cheap.
The UI mockups are interesting. Personally, I would love something like this.
I really like Frank’s insights into the web as a medium, and this podcast is no different. Around the 18-minute mark, though, it gets really interesting.
Ethan asks him the following question:
in some ways you’re sort of trying to frame what a web native aesthetic might be in general for web design...I’d love to hear a little bit more about that, Frank, and just generally how you think about what the web needs as a design medium
I’ve just pulled the entire transcript at that point, because I think everything Frank says is worth pondering (emphasis mine):
One of the reasons that I think so much about what websites should look like, not just in specific terms like, “Oh, I have this client. What should their website look like?” but just in general, what should the experience of going from website to website feel like, it’s mostly grounded in the fact that I sort of see web designers repeating things that we’ve labeled as mistakes before. We did a lot of work...trying to iterate to people the importance of semantics and accessibility on websites, and the benefit to users of having consistent experiences across those website and having those experiences be driven by the interactivity of the browser, right? That’s why Flash websites weren’t so great, because every time you hit a Flash website, you didn’t know how to use it. I see us sort of repeating that at this point. There are sort of these marquee websites that are obviously for marketing but there’s a lot of whizzbangery around it because they’re meant to draw attention. But they have some of those fundamental usability problems that I think those early Flash websites had, and it’s really hard for me to look at them and not see them as cumbersome and bloated—and cool, but I find myself looking at them more than actually using them. Maybe that’s the intent, I’m not really sure.
So, that’s kind of one of the reasons why I was like is there an oughtness?...Is there a way towards making websites that feel like they’re websites? I have a pretty good feeling about what that is and it doesn’t necessarily overlap too much with the whizzbangery that gets a lot of attention. So, wanting to really drill down and say, well, okay, what’s the web’s grain? Well, the web kind of wants you to stack things vertically on top of each other and have quite a bit of text. It wants to be fluid, it wants to scroll vertically, and it wants to probably use flat colors or simple gradients because that’s what’s easy to specify inside of CSS, and also you can take those aesthetic rules and stretch them out to boxes of indeterminate shape or boxes that might change shape based on how somebody’s accessing the website or how much content is sitting inside of that box. So, it’s like what is the aesthetics of fluidity? That’s really what the main question is, and a lot of it is dictated by what the tools make easy for you to do.
So, I think that you can make a perfectly great and serviceable website probably with just, I don’t know, 100 to 150 lines of CSS. It doesn’t take really that much. It doesn’t take a lot of JavaScript or anything like that. The old websites from the 90’s, they still work, their fonts just need to be a little bit bigger and they need to set a max width on their paragraph so it has a nice measure. Other than that, you go back and look at a bunch of the essays by Tim Berners-Lee and you’re like, “Actually, this still holds up. I’m not a big fan that it’s in Times New Roman, but that’s what they had to work with.”
So, that’s what’s interesting to me. It’s taking sort of a principled stance as a starting point, honoring the materials that you’re working with and believing that the web has a grain like how a piece of wood has a grain. You can work against that grain, and that creates interesting work that requires a lot of craftsmanship, but for the most part, if you’re building something, you’re going to want to go with the grain because it’s going to be sturdier, it’s going to be easier for you to work with and typically, hopefully, in the process it will be a little bit more beautiful, too.
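Frank’s “100 to 150 lines of CSS” claim is easy to believe. Here’s a minimal sketch of the “grain” he describes: vertical flow, fluidity, a readable measure, slightly bigger text (the specific numbers are my own guesses at sensible defaults, not his):

```css
/* Going with the web's grain: vertical stacking, fluid width,
   a capped measure, and slightly-bigger-than-default text. */
body {
  max-width: 38em;        /* cap the measure on wide screens */
  margin: 0 auto;
  padding: 0 1em;         /* breathing room on narrow screens */
  font-size: 1.125rem;    /* "fonts just need to be a little bit bigger" */
  line-height: 1.5;
}
img {
  max-width: 100%;        /* images stay fluid too */
  height: auto;
}
```

Everything else in those 90s pages, as he says, still basically holds up.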
We had a conversation about web fonts mostly in that, from a kilobyte perspective, they’re pretty pricey, and there’s all of these logistics to worry about if you want a performant website, about how they load and if you want the flash of unstyled text or using JavaScript to put conditional classes on bodies to change the body font after the fonts load and those sorts of things. My question was just sort of like, well, that’s really easy for other people, but every additional step that they need to take is an extra point of fragility, right? So, I’m just sort of wondering is it worth the effort.
Right now, my website is using two typefaces, one is called Fakt and the other one is called Arnhem, and the fallback immediately after that is San Francisco and Georgia. If I take out the web fonts, I like it nearly as much as if I had the web fonts in there. The vibe of the site changes a little bit, but for the most part most of the typefaces are of the same size, so it isn’t like a world of difference changing the typeface to these fallbacks. So it’s like, well, do I actually need those typefaces in there, or would it just be easier and more stable to have those system fonts being used? I kind of waffle on it, I go back and forth probably every single day, and I decided to leave them in because I was like, well, I bought them, let’s use them. But it is sort of like this interesting question whether these additional assets, what the trade-off for each one of these is. Because every additional element you add to a web page, it costs something, you know? It benefits in some way, but it also costs something, and eventually you’ve got to justify the cost, because we can’t communicate the size of web pages before they’re loaded.
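One low-effort way to soften the trade-off Frank describes (a CSS-only alternative to the JavaScript class-toggling he mentions, not necessarily what his site does) is to declare the fallback behavior in the font declaration itself:

```css
/* font-display: swap shows the fallback (e.g. Georgia) immediately
   and swaps in the web font when it arrives: a flash of unstyled
   text with zero JavaScript. The font path here is a placeholder. */
@font-face {
  font-family: "Arnhem";
  src: url("/fonts/arnhem.woff2") format("woff2");
  font-display: swap;
}
body {
  font-family: "Arnhem", Georgia, serif;
}
```

If, as he says, the fallbacks look nearly as good anyway, this removes the extra point of fragility without removing the typefaces.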
My takeaway: can I be a little more humane in how I talk about code? Rather than “this code sucks”, how about “I can’t understand this code – yet.” The inference being: the problem lies with me, the reader, not the original writer.
In a similar vein, at the end of the day humans (i.e. developers) are the real resource of your business, not the code. This is because all code rots as business requirements change. When I rewrite code, it’s a sign of adding value to the business, not a sign of failure on the part of the previous programmer(s).
What follows is mostly just a brain dump of contents from this article that stuck out to me.
By analogy, plenty of people find reading Homer, Shakespeare, or Nabokov difficult and challenging, but we don’t say “Macbeth is unreadable.” We understand that the problem lies with the reader. We may not have sufficient experience with the language and idioms. We may not have enough historical and cultural context (similar to lacking domain expertise when looking at software). We may not have the patience or desire to invest time learning how to read a challenging book. Wikipedia articles and Cliff’s Notes exist to give tl;dr versions of books to people who can’t or don’t want to read the original. When we observe this tendency in other (non-programming) contexts we may interpret it as laziness or short attention span. When we react this way to code we blame the code and the original programmer.
Programmers usually think that they should focus on writing code. Reading code, especially someone else’s code, seems like grunt work, a necessary evil, often relegated to junior programmers in maintenance roles.
I have personally witnessed (more than a few times) professional programmers dismiss working, production code as “unreadable” and “unmaintainable” after looking at it for just a few minutes.
“Good code is simple” doesn’t actually say anything. My many years of programming experience and business domain expertise gives me a very different idea of “simple” than someone with less experience and no domain expertise looking at some code for a few minutes. What we call “simple” depends on our experience, skills, interest, patience, and curiosity. Programmers should say something when they don’t understand code, but rather than saying “this code sucks” they should say “I can’t understand this code – yet.” That puts the focus on the person who struggles to understand rather than on the code. I agree that code reviews improve code quality and team cohesion, but whether that translates to “simple” code depends on the programmers. Programming teams usually converge on common idioms and style, and that make programming together easier, but that convergence doesn’t mean the code will look readable to an outsider looking at it six months later.
When I understand the code I may think that I know a simpler or more clear way to express it, or I may think that the code only presented a challenge to me because I didn’t have the skills or knowledge or right frame of mind. In my experience figuring code out takes significant time and effort, but when I get through that I don’t usually think the code has fatal readability flaws, or that the original programmer didn’t know what she was doing.
Public code reviews create a kind of programmer performance art, in which programmers write code to impress other programmers, or to avoid withering criticism from self-appointed experts.
Better programming comes through practice, study (from books and other code), and mentoring. It doesn’t come from trying to blindly adhere to rules and dogma and cargo cults you don’t understand or can’t relate to actual code.
All code baffles and frustrates and offends a significant subset of programmers.
How do you learn to write readable code? Like learning to write readable English, you have to read a lot.
I don’t generally read Hacker News (nor its comments), but for some reason, while traveling down the black hole of internet browsing, I ended up in the Hacker News comments for this article. I thought these were interesting observations.
One comment
I like Rich Hickey's stance on this: "simple" is objective (antonym: "complex"), whereas "easy" is subjective (antonym: "hard"). Easy depends on skills, interest, patience, curiosity - but simple does not. Simple is about lack of intertwining of concerns. About writing code that can be understood without "bringing in" the entire codebase into your mind. Like, you can understand a pure function just by looking at it (it's simple). If it modifies or uses a global variable, now you have to look at all the places that "touch" that global variable, in order to understand what your function does; the code is thus much more complex.
Another (emphasis mine)
Reading code is about 10x as hard as writing it. It takes more concentration, it's less fun, it's harder, and it doesn't impress anyone. You have to know the language better than the person who wrote it, because not only do you have to understand why the code does what they intended it to, but you also have to understand why the code does other things they didn't intend (a.k.a. bugs). But in my experience, you save your team a lot more time and energy in the long run by preferring to read and understand existing code.
Another (emphasis mine)
There is a lot of context you’ve built up in writing that code that could never fit in the comments (even if thoughts could easily be expressed in words, they would dwarf the code and not directly correspond to it; comments can actually make code reading harder in this way). It really isn’t about the language either...but how the problem was defined and understood in the first place, how this understanding was encoded in the software.
Another
I sometimes describe programming as a one-way hash operation on requirements. A lot of information and context gets lost when writing software, and I haven't seen a workable solution to that problem yet.
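Coming back to the first comment: Hickey’s simple/easy distinction can be made concrete in a few lines of JavaScript. This is my own sketch, not from the comment (the `taxRate` example is invented):

```javascript
// Simple (in Hickey's sense): everything the function depends on is
// in its signature, so it can be understood in isolation.
function totalWithTax(subtotal, taxRate) {
  return subtotal * (1 + taxRate);
}

// Complex: the result is intertwined with mutable state defined
// elsewhere; understanding any call site means tracking every write
// to `taxRate` across the codebase.
let taxRate = 0.5;
function totalWithGlobalTax(subtotal) {
  return subtotal * (1 + taxRate);
}

console.log(totalWithTax(100, 0.25)); // always 125
console.log(totalWithGlobalTax(100)); // 150 now...
taxRate = 0.25;
console.log(totalWithGlobalTax(100)); // ...125 later, for the same call
```

Neither function is “harder” to read than the other; the second is more complex because its meaning can’t be recovered from the function alone.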
A really interesting article that looks at the fluctuating meanings behind punctuation in typography:
At its leading edge, punctuation is volcanically active, giving shape to concepts that move far faster than words. Anyone communicating today has seen #topics and #themes and #categories identified this way, using a symbol that was intuitively understood and replicated even before it was first called a hashtag in 2007. The symbol and its meaning are now universally recognized, transcending even the locality of language, but their use is scarcely a decade old — an astounding accomplishment for a bit of lexical fluff
But the “#” symbol has a myriad of different meanings:
Nº was the number sign before # became a number sign, and it refreshingly serves this one and only purpose. Compare the #, which when preceding a number is read as “number” (“#1 in my class”), but when following a number means “pound” or “pounds” (“70# uncoated paper”), leading to printshop pile-ups like “#10 envelope, 24# bond.” To programmers, a # can mean either “ignore what follows” (as in a Python comment) or “use what follows” (when referencing a page fragment, or a Unicode value in HTML). To a proofreader, a # means “insert space,” so in the middle of a numbered list, the notation “line #” does not mean “line number,” but rather “add a line space.” Because of #’s resemblance to the musical symbol for “sharp” (♯), it’s a frequent stand-in for the word “sharp,” and often the correct way of rendering a trademarked term such as The C# Programming Language. The # is rapidly assuming musical duties as well, especially in online databases, leading to catalog collisions like “Prelude & Fugue #13 in F#.” How fortunate a designer would be to have a numero symbol, with which to write “Prelude & Fugue Nº 13 in F#,” or “Nº 10 Envelope, 24# bond.”

(If you’re curious what the # symbol has to do with the abbreviation lbs., here’s one possible missing link.)
Conclusion:
the Nº is a reminder that typography exists to serve readers, and that readers do not live by semantic punctuation alone. There’s a place for variety and richness in typography
A great article, worth reading in its entirety.
The principles of this talk are illustrated with Bash code examples, but I think they are general enough to apply to any programming language. The speaker walks you through his own personal thought process and techniques for understanding and refactoring a piece of code which he did not write himself.
I found many useful ideas in his personal techniques that I’d like to try. His overall goal is to make the code easy to understand and comprehend at a glance. He does this by breaking things up into really small function chunks, so you end up with something like:
# Top of file
DoThing
DoAnotherThing
DoLastThing

# Underneath the main execution are the declarations
DoThing() { ... }
DoAnotherThing() { ... }
DoLastThing() { ... }
I also enjoyed this quote around the ~11:30 mark:
Programming is not a moral debate. We’re not talking about evil and good. We’re talking about a process of programming. “Writing terrible code” is probably a misnomer. That so called “terrible code” is code that is experimental and trying to prove something. That’s fine. Prove it. Understand it. Get it to work. Learn from it. Then make it clearer. Make it better. As needed.
I liked this article, and I personally agree that the metaphor of the human brain as a computer or information-processing machine feels intuitively off-base from what I experience (both as a human and as an admittedly amateur computer enthusiast and writer-of-code).
I found rather interesting the author’s examples of how human metaphors for the brain's inner workings have changed drastically over time (none yet seem to have adequately explained the phenomenon):
The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.
The author concludes that, at some future day, the metaphor of the human brain as a computer will seem as ridiculous as this notion of the brain as a hydraulic machine does now. One example of how a computer is a poor metaphor for the mind is found in the idea of “storing memory”:
Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
He continues:
Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.
If you start paying attention, you’ll notice how pervasive the metaphor of the “human brain as a computer” is. In fact, I admit (somewhat ashamedly) that when my first son was born I was kind of in shock that during his nine months of development in the womb, there wasn’t some kind of bug that worked its way into his biological system (a weird nose, an immune deficiency, etc.). Thousands of babies must be born every day, most of them likely in “normal working order”. How is that possible? I can’t even ship a single piece of code without some underlying bug. I suppose I’ve been spending too much time writing software.
A well-articulated set of arguments for why the folks at iA ship their plain-text editor with only a monospaced font:
In contrast to proportional fonts that communicate “this is almost done” monospace fonts suggest “this text is work in progress.” It is the more honest typographic choice for a text that is not ready to publish...The typographic rawness of a monospace font tells the writer: “This is not about how it looks, but what it says. Say what you mean and worry about the style later.” Proportional fonts suggest “This is as good as done and stand in an intimidating contrast to a raw draft.”
I wonder if that’s why there are so many bugs in software: do we subconsciously believe it’s always a work in progress? Well, the folks at iA address programmers and monospaced fonts later:
Programmers use monospaced fonts for their indentation and because it allows them to spot typos. In a perfectly regular horizontal and vertical raster, letters and words become easily discernible
But is there a balance between a proportional font and a monospaced one?
This year, again, we set out exploring our own writing font. We started from scratch, moved from proportional to monospace to three spaces and ended up with duospace...Progressively, we came to realize that the right question is not how to make a proportional font look like a monospace, but how many exceptions you allow until you lose the benefits of a sturdy monospace.
And here’s the why behind exploring duospace:
The advantage over proportional fonts is that you keep all benefits of the monospace: the draft like look, the discernability of words and letters, and the right pace for writing. Meanwhile, you eliminate the downside stemming from mechanical restrictions that do not apply to screen fonts.
Honestly, I’d like to see more blog posts like this. These kinds of observations (and their implications) get brushed over too frequently. In my opinion, the author breezes over a topic that could result in ultimate regret at the end of his life:
As a so-called HCI (Human Computer Interaction) designer, I know that using a computer I am, in fact, communicating with a computer. I communicate with computers all day long. I know that, most of the time, I talk to something that has no body, no feelings, and no understanding...I mostly use the computer as a tool to talk to other humans. I structure interfaces and write text that I share with other humans. I communicate more with computers than with my kids. I caress my iPhone more often than my kids. This is a bit sad. Maybe it’s very sad. But, hey, most people spend more time at work than with the family! Spending time with my computers, I support my family. And, hey, eventually, my words and designs will reach other human beings. I know that what I do on my computers will be felt by humans in some way. I fear that on my death bed I might regret these words as much as what they try to deny. But, hey… There is a difference between communicating through computers and communicating with computers.
He also touches on that nagging concern many of us in tech have: that what you do becomes worthless in a matter of years, or even months:
Spending time with computers we still risk that all the energy we invested in communicating with them disappears into that little black electric holes that used to eat our Word documents. When we talk to computers, we risk dying a little, as we lose time to the possibility that all our energy turns to zeroes.
Conclusion:
Just pay attention to not pour half your life into the digital void.
Great analogy:
One of the first lessons you learn as a young traveler when you go to a faraway country: avoid the people that call you on the street. “Massage?” “Hungry?” “Need a guide?” Only noobs follow the hustlers. You find a quiet spot and research where to go. Then you go there and then go further. Same thing when you travel on the Web. Don’t get lured in. Find a quiet spot and research and then go there. And then go further...Things pushed in our stream through an algorithm tailored to our weakness are the digital equivalent of the calls that try to lure you in when you’re walking down a street in Bangkok.
Also, I thought this was a rather interesting (and funny) observation on how young’uns view URLs. Apparently, this was a conversation that happened:
11-year-old: “What is this strange stuff on the Milk package?”
Dad: “This strange stuff is a URL.”
11-year-old “What does it say?”
Dad: “It’s an Internet address.”
11-year-old “Address of what?”
Dad: “Of a Website. It’s used in the browser—you put it in that field on top and then you go to a Website.”
11-year-old “What is the browser?“
The author’s observation on this conversation is that:
The browser now is just another app...Apps bring him there sometimes. To a chatting teen, the address bar is a cousin of the terminal.
Just happened to be listening to some Cake the other day when the song “No Phone” came on. To be honest, I kind of sat marveling that this song was written in 2003, way before smartphones came along. It seems incredibly prescient. Here are some lyrical excerpts:
No phone No phone I just want to be alone today
...
Ringing stinging
Jerking like a nervous bird
Rattling up against his cage
Calls to me throughout the day
...
No phone No phone I just want to be alone today
...
Rhyming chiming got me working all the time
Gives me such a worried mind
...
No phone No phone I just want to be alone today
...
Shaking quaking
Waking me when I'm asleep
Never lets me go too deep
Summons me with just one beep
The price we pay is steep
...
My smooth contemplations will always be broken
My deepest concerns will stay buried and unspoken
...
No phone No phone I just want to be alone today
This article is an exhaustive look at computer latency over the last few decades. The conclusion? Modern computers have significantly slower keyboard-to-screen response times than machines from decades ago.
the default configuration of the powerspec g405, which had the fastest single-threaded performance you could get until October 2017, had more latency from keyboard-to-screen than sending a packet around the world.
Though the article specifically details the degradation in keypress latency on computer hardware over the past few decades, I found this observation eerily similar to what’s happening with the degradation of speed on the web over the same period:
The solution to poor performance caused by “excess” complexity is often to add more complexity. In particular the gains we’ve seen that get us back to the quickness of the quickest machines from thirty to forty years ago have come not from listening to exhortations to reduce complexity, but from piling on more complexity.
Data only answers what, not why:
At Booking.com, they do a lot of A/B testing.
At Booking.com, they’ve got a lot of dark patterns.
I think there might be a connection.
A/B testing is a great way of finding out what happens when you introduce a change. But it can’t tell you why.
More:
The problem is that, in a data-driven environment, decisions ultimately come down to whether something works or not. But just because something works, doesn’t mean it’s a good thing.
If I were trying to convince you to buy a product, or use a service, one way I could accomplish that would be to literally put a gun to your head. It would work. Except it’s not exactly a good solution, is it? But if we were to judge by the numbers (100% of people threatened with a gun did what we wanted), it would appear to be the right solution.
Love the picture he paints with the “gun-to-head” example. Though often mistakenly interpreted otherwise, data is only one piece of a multi-faceted story.
Jeremy responding to the commonly-held assertion that the web is a primitive technology because it was just designed for sharing documents.
If the web had truly been designed only for documents, [rich interactive applications] wouldn’t be possible.
The author asks: inflight smoking has been banned on commercial airlines for the last two decades, but airplanes are still fitted with ashtrays. Why? Because rules aren’t going to stop 100% of people. The author summarizes the aircraft manufacturers’ sentiment: “We absolutely do not want people to smoke on board, but if they do, then we need to handle the fallout from it in the safest way possible.” It’s about pragmatism.
How does this relate to writing code?
ten years of being a developer has taught me that, sometimes, doing the wrong thing is the right thing to do
Also:
When a team cannot bend the rules of your system or framework, they’ll often opt to simply not use it at all. This is a net loss, where allowing them to do the wrong thing would have at least led to greater adoption, more consistency, and less repetition.
The conclusion being:
Whenever you plan or design a system, you need to build in your own ashtrays; a codified way of dealing with the inevitability of somebody doing the wrong thing.
I liked this reminder that things don’t always work as expected:
If you build and structure applications such that they survive adverse conditions, then they will thrive in favourable ones. Something I often tell clients and workshop attendees is that if you optimise for the lowest rung, everything else on top of that comes for free.
An interesting look at the value proposition of PureScript.
What I found really interesting here were the function signatures, because they tell you so much at a glance, especially when your code has side effects. Here’s the example the presenter uses:
-- Pure
summariseDocument :: String -> String
-- Needs network
fetchDocument :: DocumentId -> Eff (ajax :: AJAX) String
-- Needs a browser
renderDocument :: String -> Eff (dom :: DOM) ()
This lets you keep track of what pieces of your code can run by themselves and which require other systems (servers, databases, etc) in order to run.
[PureScript] takes you to the future of large scale front-end code reliability. [Now I’ve written systems in PureScript] and every one of those systems in recent years has needed a little bit of JavaScript. Maybe five to ten percent of the codebase is JavaScript. All of my bugs, all of my runtime exceptions, all of my problems, come from that five to ten percent. The rest [of the codebase] is rock solid. I worry about logical errors but I don’t worry about the reliability of my code anymore on the front-end.
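PureScript enforces this separation in its type system, but the discipline transfers even to plain JavaScript: keep the pure transformation apart from the code that needs the network or the DOM. A rough sketch of the same idea (the function bodies and the `/documents/` endpoint are my own invention, echoing the talk’s names):

```javascript
// Pure: same input always gives the same output, no systems required.
// This is the part you can test and reason about in isolation.
function summariseDocument(text) {
  // Naive "summary": just the first sentence.
  return text.split(". ")[0] + ".";
}

// Needs the network: isolated behind its own function so the
// effectful surface of the program stays small and visible.
async function fetchDocument(documentId) {
  const res = await fetch(`/documents/${documentId}`); // hypothetical endpoint
  return res.text();
}

console.log(summariseDocument("First sentence. Second sentence."));
// → "First sentence."
```

In JavaScript nothing stops `summariseDocument` from quietly reaching for the network; PureScript’s signatures make that impossible, which is the reliability the speaker is describing.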
I really enjoyed this quote. It speaks directly to improving your language skills, but I think can more broadly be applied to just about any skill at which you wish to improve (emphasis mine):
Not just reading a lot, but paying attention to the way the sentences are put together, the clauses are joined, the way the sentences go to make up a paragraph. Exercises as boneheaded as you take a book you really like, you read a page of it three, four times, put it down, and then try to imitate it word for word so that you can feel your own muscles trying to achieve some of the effects that the page of text you like did. If you're like me, it will be in your failure to be able to duplicate it that you'll actually learn what's going on. It sounds really, really stupid, but in fact, you can read a page of text, right? And “Oh that was pretty good…” but you don't get any sense of the infinity of choices that were made in that text until you start trying to reproduce them.
Nicholas Carr at it again.
The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are.
He speaks about an interesting test that was done on cognition and how the results showed that if your phone was even near you, you scored less (emphasis mine):
The results were striking. In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
The most interesting part is that they didn’t even know it:
In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking
I think we can all admit it’s tough:
Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
Perhaps we’re not as in control as we think:
The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
But is it really that big of a surprise our phones have such a hold on us?
Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
Another problem is that we offload remembering information to the computer because we have search engines available to us 24/7. But that is diminishing our own personal knowledge. Plus, as was found in a study “when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though ‘their own mental capacities’ had generated the information, not their devices.” So what do we do?
Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
The conclusion? (emphasis mine):
That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
At the end of the day, all our phones can give us is data, but we often misperceive that as knowledge:
Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains. When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning. Upgrading our gadgets won’t solve the problem. We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
Generally enjoy these talks by Rich Hickey, even though a lot of the time he’s talking about programming concepts beyond my understanding. What I do enjoy is his ability to describe problems in mainstream programming and ask “Wait, why are we doing this? We’re making things so hard for ourselves!”
Here’s just one quote I enjoyed from his talk:
When you look at programming languages, you really should look at: what are they for? There’s no inherent goodness to programming languages. Only suitability constraints.
My feelings precisely:
Web development used to be a lot simpler. If I wanted to test a library or hack together a quick demo, I could just <script src="some-library">.

I could reload the page instead of re-compiling a bundle. I could load a static page instead of running a development server.
Our default workflow has been complicated by the needs of large web applications. These applications are important but they aren’t representative of everything we do on the web, and as we complicate these workflows we also complicate educating and on-boarding new web developers.
Some of the web components in the examples are pretty cool. I hope this future really is as near as the author says.
Edit: I dove into web components a bit after seeing this article. They’re pretty cool and it feels good to be “using the platform” of the web. However, I still really love the declarative nature of React vs. the imperative nature of web components. Maybe I’ll write more about this in the future. (Who am I kidding? That post probably isn’t going to happen.)
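For a taste of that imperative flavor, here’s a minimal custom-element sketch (the element name, markup, and runtime guard are my own invention; the templating function is pulled out as a pure helper so the example runs even without a DOM):

```javascript
// Pure templating kept separate so it can be tested without a browser.
function greetingMarkup(name) {
  return `<p>Hello, ${name}!</p>`;
}

// Guarded so the snippet doesn't crash outside a browser environment.
if (typeof customElements !== "undefined") {
  class HelloGreeting extends HTMLElement {
    connectedCallback() {
      // Imperative: we tell the browser *how* to update, by hand --
      // the contrast with React's declarative render.
      this.innerHTML = greetingMarkup(this.getAttribute("name") || "world");
    }
  }
  customElements.define("hello-greeting", HelloGreeting);
}

console.log(greetingMarkup("web")); // → <p>Hello, web!</p>
```

In a page it would be used as `<hello-greeting name="web"></hello-greeting>`; it’s “using the platform,” but every DOM mutation is yours to manage.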
Good article making the rounds in technology circles about how unreliable code can be.
Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second. That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.
Stupid computers. Always doing precisely what you tell them to instead of catching the gist of your commands. Do what I mean, not what I say!
Question for Dan: “What have you learned after working at Facebook for almost two years?”
He gave a response with a number of bullet points, but this one stood out to me:
Think about code in time. Don't stop at thinking about the code as it is now. Think about how it evolves. How a pattern you introduce to the codebase might get copy and pasted all over the place. How the code you're spending time prettying up will be deleted anyway. How the hack you added will slow down the team in the long run. How the hack you added will speed up the team in the short term. These are tradeoffs, not rules. We operate on tradeoffs all the time and we must always use our heads. Both clean and dirty code can help or prevent you from reaching your goals.
One of my favorite critics of modern technology, Nicholas Carr, is at it again. This time questioning the culture behind AI-powered home robots like the Echo:
Whether real or fictional, robots hold a mirror up to society. If Rosie and her kin embodied a 20th-century yearning for domestic order and familial bliss, smart speakers symbolize our own, more self-absorbed time.
It seems apt that as we come to live more of our lives virtually, through social networks and other simulations, our robots should take the form of disembodied avatars dedicated to keeping us comfortable in our media cocoons. Even as they spy on us, the devices offer sanctuary from the unruliness of reality, with all its frictions and strains. They place us in a virtual world meticulously arranged to suit our bents and biases, a world that understands us and shapes itself to our desires. Amazon’s decision to draw on classical mythology in naming its smart speaker was a masterstroke. Every Narcissus deserves an Echo.
The older I get, the more every problem in tech seems to be a matter of getting humans to work together effectively, and not tech itself.
The author begins with this quote:
What the pupil must learn, if he learns anything at all, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and aligning with those realities. If we do not let the world teach us, it teaches us a lesson. — Joseph Tussman
Then adds this comment:
Leverage amplifies an input to provide a greater output. There are leverage points in all systems. To know the leverage point is to know where to apply your effort. Focusing on the leverage point will yield non-linear results.
I stumbled on this article when I was reading “The Difference Between Amateurs and Professionals” which stated the following:
Amateurs believe that the world should work the way they want it to. Professionals realize that they have to work with the world as they find it.
Most modern devices have RAM measured in gigabytes and any type of closure scope or property lookup is measured in hundreds of thousands or millions of ops/second, so performance differences are rarely measurable in the context of an application, let alone impactful.
Then later:
In the context of applications, we should avoid premature optimization, and focus our efforts only where they’ll make a large impact. For most applications, that means our network calls & payloads, animations, asset caching strategies, etc… Don’t micro-optimize for performance unless you’ve noticed a performance problem, profiled your application code, and pinpointed a real bottleneck. Instead, you should optimize code for maintenance and flexibility.
Always interesting insights from Jeremy:
I’ve written about seams before. I really feel there’s value—and empowerment—in exposing the points of connection in a system. When designers attempt to airbrush those seams away, I worry that they are moving from “Don’t make me think” to “Don’t allow me to think”.
In many ways, aiming for seamlessness in design feels like the easy way out. It’s a surface-level approach that literally glosses over any deeper problems. I think it might be driven by an underlying assumption that seams are, by definition, ugly. Certainly there are plenty of daily experiences where the seams are noticeable and frustrating. But I don’t think it needs to be this way. The real design challenge is to make those seams beautiful.
An interesting overview on the state of React and where it’s headed, especially in regards to React Fiber and its “cooperative multitasking” feature.
The speaker does a really good job explaining the current problem React has due to the single-threaded nature of JavaScript, which essentially boils down to this: it doesn’t matter how efficient your code is if you end up scheduling a lot of it in an uninterrupted sequence. React Fiber attempts to solve this through a more asynchronous approach to component rendering.
It’s an interesting look at where we are with React and where we might go in the future.
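The cooperative idea itself can be sketched outside React: split a big job into small units of work and yield between slices so the main thread stays responsive. A toy illustration of the scheduling pattern, not React’s actual implementation:

```javascript
// Toy cooperative scheduler: process work in bounded slices, yielding
// control between slices instead of blocking the thread in one long run.
function runCooperatively(units, sliceSize, onDone) {
  const results = [];
  function doSlice() {
    // Do a bounded amount of work...
    for (let i = 0; i < sliceSize && units.length > 0; i++) {
      results.push(units.shift()());
    }
    if (units.length > 0) {
      // ...then yield, letting input handlers and painting run first.
      setTimeout(doSlice, 0);
    } else {
      onDone(results);
    }
  }
  doSlice();
}

runCooperatively(
  [() => 1, () => 2, () => 3, () => 4],
  2,
  (results) => console.log(results) // [1, 2, 3, 4]
);
```

The total work is the same; what changes is that nothing else on the thread is starved while it happens, which is the heart of Fiber’s “cooperative multitasking.”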
This is a couple years old now, but I found Frank’s “lessons learned” insightful:
- Life isn’t a story.
- A lot of things don’t need to be intellectualized: “because I want to” is often a good enough reason.
- Empathy is first an act of imagination.
- Don’t take business advice from people with bad personal lives.
- There are two ways to look at your life: what happened to you or what you did.
- Resources don’t replace will.
- Lazy trumps smart.
- Everybody wants to give advice and no one wants to take it.
- We only deserve what we can take care of.
- Clearly labeling other people’s petty grievances as bullshit is a fast track to well-being and fewer complaints of your own.
- Money is circulated. Time is spent.
- You can punch back.
- Social media gets less annoying if you’re willing to say to people, “Who the hell do you think you are?”
- Pain is unavoidable. Suffering is optional.
- Who you are has more to do with how you act and what you love than what you have or say.
- It’s more complicated than that.
- Everything good I have came from honesty, good intentions, and low expectations.
- Stick with the attentive ones.
- Find a way to forgive your mistakes.
- You’ll never know enough. Oh well.
This is a Q&A article I stumbled on that has some good pieces of advice in it.
First, I liked this point on the absolutist terms we so often use in conversations: “oh, we have to use X because it’s declarative”. Declarative compared to what? These arguments should be more specific.
We cannot talk about everything in absolute terms. Compared to assembly code, C is declarative. But compared to transistors and gates, assembly code is declarative. Developers need to recognize these levels of abstractions.
I also liked the metaphor of computer tools being an extension of your mind:
Good developers understand that they can't do everything, and they know how to leverage tools as prosthetics for their brains.
Some interesting advice on how to find your way between theory and practice (as always, the answer seems to lie somewhere in the middle):
focus at the intersection of theory and practice. There is no progress without friction. It is easy to dive into theory, or all the way into just practice—but the real interesting work happens between theory and practice. Try to understand both sides. The safe spot is to retreat to one of the extremes.
Remember: there is no silver bullet. Your processes alone won’t save you:
With prescriptive processes, people are looking for a silver bullet to solve problems, but it doesn't exist...The world is super-confusing, and you have to embrace it and work with it.
Lastly, I love this bit about finding questions before answers. I often have to remind myself of this before digging into any project:
first focus on finding the right questions, and then the answers.
There were two good reminder pieces for me in this article:
Errors are the friends of developers, not enemies.
And
It helps to think of errors as built-in failure cases. It’s always easier to plan for a failure at a particular point in code than it is to anticipate failure everywhere.
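A tiny sketch of that idea (the `parseConfig` helper is hypothetical, not from the article): the one place parsing can fail returns an explicit failure case, so callers don’t have to anticipate exceptions everywhere.

```javascript
// Failure is handled at the single point where it can occur,
// and surfaced as an ordinary return value rather than a surprise.
function parseConfig(text) {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (err) {
    // The error is a built-in failure case, planned for up front.
    return { ok: false, error: err.message };
  }
}

console.log(parseConfig('{"a":1}').ok);  // true
console.log(parseConfig('not json').ok); // false
```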
A page full of gifs depicting interaction paradigms for designing data tables. Kind of an interesting little summary.
Terry Crowley, a Microsoft engineer who led Office development over the last decade, reflects on the complexity of building software: from planning releases to technical strategy to dealing with market competition. Two parts stuck out to me that I wanted to note:
First, he comments on how Google came into the “office productivity” space and applied pressure to Microsoft by offering a suite of tools that ran purely in the browser, available on any platform with a browser. Though he thinks it’s too early to see how this will all play out, he believes the complexity Microsoft has endured in building native Office tools (as opposed to Google’s web-based tools) may end up benefiting them. Google’s apps can’t compete in terms of functionality and features because building software as rich as Office in the browser just isn’t feasible.
the performance challenges with running large amounts of code or large data models in the browser and managing the high relative latency between the front and back end of your application generally make it harder to build complex applications in a web-based environment. Hyper-ventilation by journalists and analysts about the pace of Google App’s innovation generally ignored the fact that the applications remained relatively simple...I knew the pace of innovation that was possible when functionality was still relatively low ...and nothing I saw as Google Apps evolved challenged that.
In the end, the complexity of the Office software suite ends up acting as a “moat” against the attacks of competitors that simply can’t be crossed without the years of experience Microsoft has gained building this kind of software.
Competitive strategy argues that when a competitor attempts to differentiate you need to focus on neutralizing that differentiation as quickly as possible... It is clear the Office apps would not be positioned functionally the way they are now (with fully compatible native and web clients on all devices and support for offline and co-editing) if there had been any squeamishness about embracing the challenges of complexity. That complexity (as it embodies valued functionality) is the moat.
It’s interesting to think of this complexity not as a liability to the business but, when viewed and handled correctly, as an asset and a competitive advantage.
Number two:
The dynamic you see with especially long-lived code bases like Office is that the amount of framework code becomes dominated over time by the amount of application code and in fact frameworks over time get absorbed into the overall code base. The framework typically fails to evolve along the path required by the product — which leads to the general advice “you ship it, you own it”. This means that you eventually pay for all that code that lifted your initial productivity. So “free code” tends to be “free as in puppy” rather than “free as in beer”.
I find that a very interesting long-term observation on leveraging frameworks in your codebase. You always see the benefits up front. But in the long run frameworks are “free as in puppy”: once the initial joy has subsided they leave you with responsibility.
I have to admit, when I first started reading this and the author was framing some important questions, I felt like I was going to barf a little when it seemed he was going to give a definitive answer for each (like almost every article on the internet it seems). But then he didn't. It was so refreshing. It reminded me how little things like that make me love Frank’s writing. The question he frames: is going off on your own worth it?
Well, I am here to offer a resounding maybe.
Frank is always marrying paradoxes, which is what makes great writing in my opinion. Like this other part:
How can we be independent together?
Independent together? Resounding maybe? Jumbo shrimp? These are great paradoxes stacked against each other and in proving contraries you find the truth. As Frank points out later in his article “independence is always supported by interdependence."
Now about employment:
Many people presume that employment is the opposite of independence, and that endlessly irritates me. It’s so short-sighted. History shows a long record of artists who did “normal” work to support their creative practice.
He points out that much of the work by famous artists and writers that we celebrate today began as “side projects” alongside their daily employment.
There’s one other important benefit to the unrelated day job: when it comes to your art, you don’t have to take any shit from anybody. You can honor any creative impulse because your paycheck is never on the line. Go nuts, make crazy shit. What’s more independent than that.
That’s one reason I’ve personally never liked contract work on the side, or even writing tutorials now. I feel like I have to finish all those things and sometimes I just don't want to. I want to explore as far as I want to go and stop when I want. A day job affords me that because my side projects can be whatever I want whenever I want. I never thought of that, but that is freedom.
Along these lines, there is also a great quote from Krista Tippett:
I worry about our focus on meaningful work. I think that’s possible for some of us, but I don’t want us to locate the meaningfulness of our lives in our work. I think that was a 20th-Century trap. I’m very committed and fond of the language of vocation, which I think became narrowly tied to our job titles in the 20th Century. Our vocations or callings as human beings may be located in our job descriptions, but they may also be located in how we are present to whatever it is we do
That last line is fantastic: finding meaning and a calling might be found in being present in whatever we do, be that our job, parenting, or just being a friend. As Frank goes on to comment, “meaning comes from a way of being”:
When Campbell told us to follow our bliss, he wasn’t telling everyone to chase their dreams until they became careers. He said it as a call for people to pursue a vocation as Krista Tippett has defined it. Vocation is as much about who you are and how you are as it is about what you do. Bliss is an attitude, a disposition, so meaning comes from a way of being and is not a consequence of producing work. You make the art, the art does not make you.
One last great point:
I mistake the work’s flaws for my own. Perhaps that’s something many of us have in common. The way to approach this issue is clear: we must acknowledge we are involved in our unsteadiness, but believe we are only part of its reason. If we allow room in our work for serendipity to occur, that same space must also be reserved for misfortune. We are the cause of neither.
A quote often attributed to Einstein, though apparently sourced from a nameless professor at Yale:
If I had only one hour to solve a problem, I would spend up to two-thirds of that hour in attempting to define what the problem is.
The more I work in software, the more I realize this is the way to go.
I don’t know how they get numbers like this, but it’s an interesting figure:
Thanks to the Internet and cellular networks, humanity is more connected than ever. Of the world’s 7 billion people, 6 billion have access to a mobile phone — a billion and a half more, the United Nations reports, than have access to a working toilet.
All of this interconnectivity was supposed to foster tolerance. The more we knew of someone, the more we would like them. Or at least tolerate them. Carr points out that assumption isn’t new. It’s been proclaimed by many western thinkers since the invention of the telegraph—and radio, TV, phone and Internet were only supposed to make it better. And yet, in some ways, they didn’t.
Yet we live in a fractious time, defined not by concord but by conflict. Xenophobia is on the rise. Political and social fissures are widening. From the White House down, public discourse is characterized by vitriol and insult. We probably shouldn’t be surprised
He cites research done by psychologists in 2007 on people who lived in the same apartment building, which found that:
as people live more closely together, the likelihood that they’ll become friends goes up, but the likelihood that they’ll become enemies goes up even more...The nearer we get to others, the harder it becomes to avoid evidence of their irritating habits. Proximity makes differences stand out.
Social networks seem to only amplify this effect:
Social networks like Facebook and messaging apps like Snapchat encourage constant self-disclosure. Because status is measured quantitatively online, in numbers of followers, friends, and likes, people are rewarded for broadcasting endless details about their lives and thoughts through messages and photographs. To shut up, even briefly, is to disappear. One study found that people share four times as much information about themselves when they converse through computers as when they talk in person.
Work along these lines from British scholars in 2011 concluded:
With the advent of social media it is inevitable that we will end up knowing more about people, and also more likely that we end up disliking them because of it.
That rings true every time you hear someone complain about all the racist, hateful, stupid garbage from acquaintances in their Facebook feed. I agree with Carr that, at the end of the day, you’ll never be able to build a global community of harmony with software alone.
[there’s an idea], long prevalent in American culture, that technological progress is sufficient to ensure social progress. If we get the engineering right, our better angels will triumph. It’s a pleasant thought, but it’s a fantasy. Progress toward a more amicable world will require not technological magic but concrete, painstaking, and altogether human measures: negotiation and compromise, a renewed emphasis on civics and reasoned debate, a citizenry able to appreciate contrary perspectives. At a personal level, we may need less self-expression and more self-examination.
Love that last line. It’s worth repeating:
we may need less self-expression and more self-examination
Carr concludes:
[technology doesn’t] make us better people. That’s a job we can’t offload on machines.
Some thoughts about the direction of JavaScript, specifically how to remove redundant constructs from today’s JavaScript and leave ourselves with fewer methods of expression. You might think having fewer methods of expression would be a bad thing, but he argues that fewer is actually better because it lowers the cognitive dissonance you encounter when running into two things that are mostly similar but not identical, which you then have to expend mental energy differentiating (he illustrated this with an analogy to purging your life of things, which I thought was interesting). It parallels Garrett’s law, which goes something like this: if two things are similar, either 1) accentuate the differences and make them more different, or 2) eliminate the differences and make them identical.
Here are some examples of changes the speaker recommended:
- Tabs vs. spaces: get rid of tabs. Not because spaces are “better,” but because we have two things doing the same job. Let’s simplify. And since we can’t get rid of spaces, tabs have to go.
- Double quote `"` vs. single quote `'`: get rid of single quotes for things like encapsulating strings (the character is already overloaded as an apostrophe) and use double quotes exclusively.
- `null` vs. `undefined`: get rid of `null` as an empty pointer and use `undefined` solely (he recommends repurposing `null` in the future as an empty object).
He goes into depth on other idiosyncrasies of JavaScript and how he would fix them, like what `0 / 0` should equal and why `(0.1 + 0.2 === 0.3)` returns false. It was an interesting talk, and I found the metaphor of removing clutter from your life an interesting parallel to the argument for removing redundancies from the language of JavaScript. Obviously you can’t just remove them now, but as a matter of personal code style, it’s an interesting argument I may try out in practice.
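Both of those quirks are easy to verify in a console (the epsilon comparison at the end is a common workaround, not something from the talk):

```javascript
// Two JavaScript idiosyncrasies, verified directly.
console.log(0 / 0);              // NaN: IEEE 754 defines 0/0 as Not-a-Number
console.log(0.1 + 0.2 === 0.3);  // false: binary floats can't represent 0.1 exactly
console.log(0.1 + 0.2);          // 0.30000000000000004

// A common workaround: compare against a tiny tolerance instead.
console.log(Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON); // true
```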
Two important questions when building software:
- How well does it work?
- How well does it fail?
Great example with CSS shapes. In browsers that don’t support it, if you use it, you just get a regular fallback.
Service workers are another good example. When somebody loads your webpage for the first time, they download images, HTML, CSS, etc., plus a service worker. Then on any subsequent request, the service worker does its work. Note that on that very first load, no browser has your service worker, because it hasn’t been downloaded yet. So you start by building under the assumption that your site doesn’t have a service worker, then you enhance from there.
It’s a really good way to build your UIs: build them around failure cases.
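A minimal sketch of that “enhance from there” approach. The `env` parameter and `/sw.js` path are hypothetical, used so the feature-detection logic can be exercised outside a browser; in a real page you would check the global `navigator` directly.

```javascript
// Treat the service worker as an enhancement, never a requirement.
// The baseline experience must work without it, since even supporting
// browsers lack the worker on the very first load.
function enhance(env) {
  if (env.navigator && 'serviceWorker' in env.navigator) {
    env.navigator.serviceWorker.register('/sw.js');
    return 'enhanced';
  }
  return 'baseline';
}

console.log(enhance({ navigator: {} })); // "baseline"
```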
An interesting read which presents a challenge to the traditional mobile first thinking. The author contrasts “mobile first” design philosophy (at least as one of its definitions) to an analogy of physical product design:
If “Mobile First” design philosophy were applied in the domain of physical product design, the implication would be that you should design the compact, multi-tool screwdriver first. The compact design could then be used to inform the design of a larger version. Why? Because it allegedly is best to ladder-up in complexity (see, progressive enhancement vs. graceful degradation). This idea, however, is based on the assumption that there is a consistent, linear relationship between complexity and form.
I like the challenge he presents to the assumption that “there is a consistent, linear relationship between complexity and form”. For some websites, I think starting at mobile first and building up linearly works just fine. For others, however, it leaves much lacking.
It’s human nature: I over-value where I have influence. Since I am a designer, this frequently means placing too much emphasis on how things look and work rather than the direction they are pointed. But reflecting on the other side of the issue is also interesting: I find that the more input I have in the content and strategy of the project, the less burden I place on the aesthetics. Perhaps this is because I believe the aesthetic of the work should be an extension of its objectives, so if you get the strategy right, the look follows. Since I like to tackle problems sideways, I must risk being plain and rely on direct visuals to keep the work comprehensible.
And this next part is good:
I am for a design that’s like vanilla ice cream: simple and sweet, plain without being austere. It should be a base for more indulgent experiences on the occasions they are needed, like adding chocolate chips and cookie dough. Yet these special occasions are rare. A good vanilla ice cream is usually enough. I don’t wish to be dogmatic—every approach has its place, but sometimes plainness needs defending in a world starved for attention and wildly focused on individuality. Here is a reminder: the surest way forward is usually a plain approach done with close attention to detail. You can refine the normal into the sophisticated by pursuing clarity and consistency. Attentiveness turns the normal artful
More:
the longer we spend in contact with the products of design, the more their willful attempts at individualism irritate us.
The danger of redesigning your brand to current trends:
Many believe that normalcy and consistency breeds monotony, but what about the trap of an overly accentuated, hyper-specific identity? When the world changes around you, what do you do?
This is often true of personal portfolios that strive to be different. In reality, when you’re sorting through tons of resumes, you’re looking for the content before the individuality. Individuality is like fireworks: it may catch your attention for a second, but once that attention is grabbed, if the content is confusing, hard to read, hard to digest, you’ve failed.
All contain the aching desire to be noticed when instead they should focus on being useful.
Interesting look at how, over time, the curtain has been pulled back on Apple’s magic. Some of this article is meh, but there are a few good pieces in here.
On how we took the wrong things away from Apple and Steve’s methods:
Designers took all the wrong ideas away from his presentations. Big reveals were marketing techniques, not methods to surprise our internal product teams. Sexy interfaces were inspirational, not things we blindly copy without consideration for users. Going against the grain was a way to inspire people, not an excuse to shun the ideas of our coworkers. Secrecy was a business technique, not a reason for us to hide and design solo in our computers. Spurning focus groups encouraged risk taking; it wasn’t a reason to avoid learning from our customers.
But now the curtain has been pulled back. We know the truth. We’ve known it the whole time from our own experiences. We just didn’t want to admit it. It’s like when you see celebrity news and realize “oh, they put their pants on one leg at a time like the rest of us”. It’s time to reshape our own thinking and processes:
It’s time for designers to embrace what really drives amazing products and innovation, connection with other people. The impactful design leader is not a lone genius that locks themselves away only to come back with magic that even they themselves don't fully understand. That’s myth, storytelling. No, the impactful design leader is a facilitator. They bring people together from all parts of their organization, rally them around ideas, and extract the best thinking into small gains that lead to big wins. They are found with people, soliciting feedback from designer and non designer alike. They realize failure is both an inevitable and necessary part of the process. They understand it takes constant iteration and a volume of ideas to get to the right answers. And they don’t have to wear a black turtleneck.
An egghead.io course with lots of useful little tidbits I hadn’t become familiar with yet:
- npm now loads your local `node_modules/.bin` folder onto your PATH, so you don’t have to specifically reference local modules via the `.bin` folder anymore. No more having to do:
  `"my-script": "./node_modules/.bin/node-sass input.scss output.css"`
  Just reference the binary normally and npm will look in your local `node_modules` first:
  `"my-script": "node-sass input.scss output.css"`
- You can pass arguments to a script using `--`. Say you compile your CSS with:
  `"css": "node-sass src/styles.scss dist/styles.css"`
  If you wanted to watch your CSS, you might duplicate the script:
  `"css:watch": "node-sass src/styles.scss dist/styles.css --watch"`
  But then you have to keep changes in sync between the two scripts. Instead you could just do:
  `"css:watch": "npm run css -- --watch"`
- `npm-run-all` is a useful little package that’ll let you run all your npm scripts more succinctly. Say you have three scripts you want to run for linting using different tools:
  `"lint:css": "csslint --option path/to/file"`
  `"lint:js": "eslint --option path/to/file"`
  `"lint:html": "linter --option path/to/file"`
  Instead of doing:
  `"lint": "npm run lint:css && npm run lint:js && npm run lint:html"`
  You can do fancy things like use globbing:
  `"lint": "npm-run-all lint:*"`
  For more info on all the neat little things it can do, check out npm-run-all.
- Use variables in your `package.json` scripts by prefixing the variable name with `$`. For example, `"version": "2.1.0"` in `package.json` can be accessed in your scripts as `$npm_package_version`. To see the variables available to you, run `npm run env`. Note this only works on Mac/Linux; you’d have to install something like `cross-var` to have it work cross-platform.
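Pulling those tips together, a `scripts` section might look something like this (file paths, tool options, and script names here are illustrative, not taken from the course):

```json
{
  "version": "2.1.0",
  "scripts": {
    "css": "node-sass src/styles.scss dist/styles.css",
    "css:watch": "npm run css -- --watch",
    "lint:css": "csslint path/to/file.css",
    "lint:js": "eslint path/to/file.js",
    "lint": "npm-run-all lint:*",
    "print-version": "echo $npm_package_version"
  }
}
```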
If you haven’t already heard, Jeremy released this “web book” as he calls it. If you read or listen to Jeremy regularly (like I do) most of the content will feel familiar. Nonetheless, I gave it a read and have noted here a few excerpts that stuck out to me when I read it.
The book’s purpose (and the power of ideas over code):
You won’t find any code in here to help you build better websites. But you will find ideas and approaches. Ideas are more resilient than code. I’ve tried to combine the most resilient ideas from the history of web design into an approach for building the websites of the future.
Interesting tidbits on why things are the way they are today:
That same interface might use the image of a 3½ inch floppy disk to represent the concept of saving data. The reason why floppy disks wound up being 3½ inches in size is because the disk was designed to fit into a shirt pocket. The icons in our software interfaces are whispering stories to us from the history of clothing and fashion.
Embracing the uncertainty of the web:
While it’s true that when designing with Dreamweaver, what you see is what you get, on the web there is no guarantee that what you see is what everyone else will get. Once again, web designers were encouraged to embrace the illusion of control rather than face the inherent uncertainty of their medium.
History of JavaScript. Love that last line:
The language went through a few name changes. First it was called Mocha. Then it was officially launched as LiveScript. Then the marketing department swept in and renamed it JavaScript, hoping that the name would ride the wave of hype associated with the then‐new Java language. In truth, the languages have little in common. Java is to JavaScript as ham is to hamster.
Imperative and declarative:
That’s a pattern that repeats again and again: a solution is created in an imperative language and if it’s popular enough, it migrates to a declarative language over time. When a feature is available in a declarative language, not only is it easier to write, it’s also more robust
Savage:
Despite JavaScript’s fragile error‐handling model, web designers became more reliant on JavaScript over time. In 2015, NASA relaunched its website as a web app. If you wanted to read the latest news of the agency’s space exploration efforts, you first had to download and execute three megabytes of JavaScript. This content—text and images—could have been delivered in the HTML, but the developers decided to use Ajax to retrieve this data instead. Until all that JavaScript was loaded, parsed, and executed, visitors to the site were left staring at a black background. Perhaps this was intended as a demonstration of the vast lonely emptiness of space. (emphasis added)
The rise of deploying web apps via traditional software tooling (packaged entirely as a js app):
It’s tempting to apply the knowledge and learnings from another medium to the web. But it is more structurally honest to uncover the web’s own unique strengths and weaknesses.
He continues:
On the face of it, the term “web platform” seems harmless. Describing the web as a platform puts it on par with other software environments. Flash was a platform. Android is a platform. iOS is a platform. But the web is not a platform. The whole point of the web is that it is cross‐platform...The web isn’t a platform. It’s a continuum.
In my search for stuff to listen to, I Google’d “the best programming talks” and this was one I stumbled on in a comment thread somewhere out there on the internet.
As I’m not a real computer programmer (but as Pinocchio said, maybe someday) I like to find talks that take a broader perspective and explore principles applicable to any discipline, be it programming, design, or maybe just woodworking. This talk had some of that, though it was also quite technical at times. Anyhow, I wanted to make some notes on the tidbits I liked (the slides from the talk can be found here).
Implementation Should Not Impact API
Don’t let implementation details “leak” into API
I think this stood out to me most because it’s something I’ve seen happening a lot at my current job: the technical details of a particular service or API have leaked into a user-facing product and become a mental model for both internal employees and external customers. The problem with this, as the speaker points out, is that it inhibits the freedom to change the implementation in the future because people depend on it.
Names Matter
Around minute 31 he talks about how your API is a little language unto itself. You should be consistent and regular with terminology so it is largely self-explanatory. If you succeed in naming things consistently and simply, your code can end up reading like prose, which is generally an indicator of a well-designed API, e.g.:
```js
if (car.speed() > 2 * SPEED_LIGHT) {
  speaker.generateAlert('Watch out for cops!');
}
```
Using Conventions
Around minute 39 he started talking on how you should borrow conventions from existing languages and platforms. Some of his points included:
- Obey standard naming conventions
- Mimic patterns in core APIs and language
- Don’t transliterate APIs
His point, which I think can be generalized to any profession, is that if you build with concepts people are already familiar with, it can lend simplicity to your product. If somebody knows how to use a native convention in a programming language or ecosystem, they’ll know how to use yours. But don't transliterate he says. If you’re building for C, don’t learn everything about C’s way of doing X and mirror that to your tool. Plus what was correct in C may not be correct for your particular implementation. It’s good to step back and ask “what is this trying to do?”.
The work of an experienced software developer... perception vs. reality.
Check out the image.
One interesting thought: HTML, CSS, and JavaScript are often called “the building blocks of the web”. But perhaps it’s worth considering URLs as the building blocks of the web:
There is another building block for the web, one that is more important than HTML, CSS and JavaScript combined. It all starts with URLs. Those things uniquely identify some piece of information on the web. It should not be that hard or expensive to have a server dump this information into HTML, whatever that information might be; some content, a list of URLs to some more content, you name it. Let’s keep it really simple, just the content, without replicating any of the site’s chrome
It’d be an interesting design exercise to work on building a site purely from URLs with no navigation, i.e. what would typically be your application header would just be one level up: in the browser URL bar.
New industry title: Front-end Systems Engineer. The responsibilities? You spend all your time updating dependencies of a project.
(See Rich Hickey’s notes about how `gem install hairball` is easy: easy to get all that complexity.)
But if you set up a system, you are likely to find your time and effort now being consumed in the care and feeding of the system itself. New problems are created by its very presence. Once set up, it won’t go away, it grows and encroaches. It begins to do strange and wonderful things. Breaks down in ways you never thought possible. It kicks back, gets in the way, and opposes its own proper function. Your own perspective becomes distorted by being in the system. You become anxious and push on it to make it work. Eventually you come to believe that the misbegotten product it so grudgingly delivers is what you really wanted all the time. At that point encroachment has become complete. You have become absorbed. You are now a systems person. (emphasis added)
On another note:
We’re not paid to write code, we’re paid to add value (or reduce cost) to the business. Yet I often see people measuring their worth in code, in systems, in tools—all of the output that’s easy to measure. I see it come at the expense of attending meetings. I see it at the expense of supporting other teams. I see it at the expense of cross-training and personal/professional development. It’s like full-bore coding has become the norm and we’ve given up everything else.
Conclusion:
engineers should understand that they are not defined by their tools but rather the problems they solve and ultimately the value they add. But it’s important to spell out that this goes beyond things like commits, PRs, and other vanity metrics...you are not paid to write code.
The codebase on big sites isn’t impenetrable because developers slavishly followed arbitrary best practices. The codebase is broken because developers don’t talk to each other and don’t make style guides or pattern libraries. And they don’t do those things because the people who hire them force them to work faster instead of better. It starts at the top. (emphasis added)
I added that emphasis because I thought it was a great point. It’s easy to point the finger and say “well, it’s not my fault the codebase is impenetrable,” but as a professional software engineer it’s your job to communicate the value and importance of a comprehensible codebase. Zeldman continues:
Employers who value quality in CSS and markup will insist that their employees communicate, think through long-term implications, and document their work. Employers who see developers and designers as interchangeable commodities will hurry their workers along, resulting in bloated codebases that lead intelligent people to blame best practices instead of work processes.
Great perspective, in my opinion, on organizational structure and its effects on code. We always talk about how this or that developer impacted the codebase, but people outside of the engineering department have an effect too. And that impact is not often talked about (or even perceived).
“When it comes to successful software development only the people matter.”
Code matters, but code is ultimately written by people. I think there were some good questions to ask here about your own software teams:
- How coherently does your team work?
- How well do they communicate with each other?
- What processes are in place that empower them?
- Do they derive pride from their work?
- How involved do they feel?
Don't love everything in this article, but a few pieces I think are solid. Like this:
When I hear someone say they have 20 years of experience, I wonder if that’s really true or if they merely had 1 year of experience 20 times. I’ve known too many developers that used the same techniques they learned in their first year of employment for the entire span of their career...My point is certainly not that younger developers are smarter. It’s that many programmers let themselves grow stale. And the bigger problem is, after doing the same year’s worth of experience ten times, many programmers forget how to learn. Not only can it be extremely hard to catch up with ten years of technology, it can be next to impossible if you’ve forgotten how to learn.
If you plan on being in the IT field for more than 10 years, you need to be a lifelong learner. I’ve always been a lifelong learner. I’ve learned and developed with numerous programming languages, frameworks, and strategies. As a result, I’ve honed my learning skills.
“I’ve honed my learning skills”: a résumé piece indeed. I need someone who will take on whatever is thrown at them. That might mean doing it yourself. Or it might mean finding someone else to do it. Or it might mean recruiting the aid of someone who does know what they’re doing and letting them guide you through to the completion of the project.
For example, at my most recent job, a customer promise was made with significant business implications. I was asked to help lead the engineering effort through to completion. All I was given was a repository for a codebase nobody understood which was written in Ruby (a language I don’t know except for the occasional fiddling around with Jekyll). But hey, it was a problem that needed to be solved to add value to the business. So I dived in, recruited others to help, and saw it through.
The idea: figure out a basic prototype deliverable first. If people don’t get it right away, the extra work you would have invested wouldn’t have been enough to make it successful anyway. Instead, refine the concept further until the prototype alone is enough to convey and convince.
Cut everything else out and share it with the world. Right now. It isn’t easy – you’ll worry it isn’t polished enough. But that’s the point. If your audience doesn’t “get it” in the rough stages, it’s unlikely a few extra hours of work will change their minds. You can always improve it later if the feedback is positive...Some of the more critical comments sting a little, at first. But I’d rather hear them now than ten episodes down the road.
An interesting commentary on a Hacker News thread. Front-end is often looked down on, and that disdain stems mostly from “real CS people” who see the web stack (HTML, CSS, JavaScript) as suboptimal compared to their “real” stacks.
When people talk bad about JavaScript, HTML, and CSS:
The main reason back-end developers don’t understand front-end is that they expect a well-defined environment.
That’s an advantage of the web! The multiple browsers, the varied environments is what gives you reach, unlike any other platform!
if there's any reason why tech ageism is amazingly dumb it's this one imo (have the guy on your team who's graduated past the fanboy stage make your tool, platform and framework choices, you will save far more than the premium that you have to pay him).
Front-end-ers love shiny stuff, but sometimes it’s employers too: (hackernews comment)
This is because employers are demanding that candidates know the latest and greatest technologies (eg. looking for 5 years of experience in 1 year old technology)...If I need to be experienced with [all this shiny new stuff] to stay employable because just doing my job isn’t enough, then I’m going to learn it.
The real problem he comes up with:
the main reason why front-end is so behind: Real Programmers With A CS Degree don’t do front-end. Front-end is for script kiddies and pixel pushers, it’s not to be taken seriously...Unfortunately Real Programmers don’t know anything about browsers and have no desire to learn. That makes them useless as front-end teachers. This problem, more than anything else, is what perpetuates web development as a hack of hacks.
specialization generally leads to optimization, not innovation
Though this talk was targeted at rails developers, I couldn’t help but see the parallels to front-end javascript developers.
npm install hairball
Is that simple or easy? It's easy — easy to get that complexity. Perhaps it was simple to get a hairball, but now you have to deal with that hairball and inevitably down the line that's complex.
If you want everything to be familiar you’ll never learn anything new.
Generally I dislike articles with headlines like this. But the story in the article illustrates a characteristic of great employees that is sometimes difficult to articulate. The story goes something like this:
A Dad asked his first son, “will you go find out how many cows Cibi has for sale?” The son promptly returned and said “6 cows are for sale.”
The Dad then asked his second son the same question. The second son later returned and said “6 cows [are] for sale. Each cow will cost 2,000 rupees. If we are thinking about buying more than 6 cows, Cibi said he would be willing to reduce the price 100 rupees. Cibi also said they are getting special jersey cows next week if we aren’t in a hurry, it may be good to wait. However, if we need the cows urgently, Cibi said he could deliver the cows tomorrow.”
This short story illustrates an admirable characteristic of great employees. It’s not just about the mandate, it’s about the why behind the mandate: “Why am I being asked to do this?” You can do what you’re told blindly, but that’s not what your employer needs. Your employer needs you to add value through your own knowledge and experience.
Most people only do what they are asked, doing only the minimum requirement. They need specific instructions on most things they do. Conversely, those who become successful are anxiously engaged in a good cause. They don’t need to be managed in all things...They also influence the direction for how certain ideas and projects go...They reach out to people, ask questions, make recommendations, offer to help, and pitch their ideas.
You’re given a sphere of influence to act within. Act. Don’t simply be acted upon.
Mar Headland has been working as an engineering manager since 1994. Recently on Twitter he talked about how he gets lots of requests for management advice. So, based on the list of questions he’s compiled over the years, he generated the following advice (rolled out in ten tweets):
- Just tell them already. One of the best things you can do as a manager is be completely blunt about what you see. Tell them now.
- Trust is the currency of good management. You cannot be a great manager if the people with whom you work do not trust you.
- Regular one-on-ones are like oil changes; if you skip them, plan to get stranded on the side of the highway at the worst possible time.
- You have to be your team's best ally and biggest challenger. You can't be a great leader by care-taking alone. Push for their best work.
- Repetition feels silly but works wonders. Start each conversation repeating the overall goal and connecting it to the discussion.
- "My team wants to work on ___ because it is more fun for them, is that okay?" No. Never. Quoting @jasonk: "Winning is fun." Go win.
- Clarify the problems your team needs to tackle. Stay all the way away from specifying the solutions. That's their job, not yours.
- You can't know how the company looks from any other seat than your own. Practice with people in other seats to communicate and manage well.
- We talk a lot about diversity and inclusion. Here's my unpopular opinion: you, as a manager, have to force it to happen, or it won't ever.
- Usually when people ask, "Should I fire this person?" the answer is yes. But usually they do it dramatically more brutally than needed.
Here’s my paraphrase:
Sometimes, try taking the things that are against your typical behavior and instead of avoiding them, do them. Practice patience, speak up, quiet down, whatever it is, use the things that make you angry as opportunities to learn how to control your own shortcomings. Be ready for future difficulties. And the best way to do that is with practice.
It’s a really interesting and practical approach to day-to-day life. When things happen that are uncomfortable or unnatural to you, embrace them as opportunities for practice, rather than exercising your already refined skill of avoiding them.
Hardly a day goes by that I don’t see a dogmatic statement about the web.
Here’s a short list:
- Never use more than two fonts on a page
- Stop using jQuery
- Every webpage should be responsive
- The cascade is evil
- Never style with IDs
- Never use CSS’s @import
- Never use CSS’s * selector
But nothing is wrong with people spouting off opinions, right?
I see ideas that start as dogmatic claims spread. I've heard people regurgitate a dogmatic statement years after I've felt like the industry had moved on. When you happen to agree with them, they feel good. They feel powerful. They feel like something you want to get behind and spread yourself. They don't have that wishy-washy "it depends" feeling that doesn't provide hard and fast answers.
However:
Everyone's situation is different than yours. You can't know everything. There is endless gray area.
Chris argues we should perhaps be a little more verbose in our opinions:
It's certainly wordier to avoid dogma when you're trying to make a point. But it's more honest. It's more clear. It's showing empathy for people out there doing things different. It makes it easier for others to empathize with you.
This was an interesting post on digital typography and, although a lot of it presented quirks and peculiarities I am already familiar with, I wanted to document a few notes I found novel.
Type and Boxes
Digital type lives in boxes. That’s how the software works. But the box is really just a suggestion. Not everything will fit all the time. On the web you don’t have to worry about this unless you start using overflow: hidden:
by default, browsers allow stuff to stick out, unless the container or one of its parents use overflow: hidden instead of overflow: visible. If for whatever reason it’s necessary to apply that restriction, it is important to add horizontal and vertical padding so that text is not clipped.
So how much space do you need to allot a box of type being cut off by overflow: hidden?
A rough suggestion would be to add horizontal padding that’s ⅓ of the font size.
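That suggestion is small enough to sketch directly (the class name and sizes here are hypothetical):

```css
/* If a text container must clip overflow, reserve roughly 1/3 of the
   font size as horizontal padding so glyphs aren't cut off. */
.clipped-headline {
  font-size: 3rem;
  overflow: hidden;
  padding: 0 1rem; /* ≈ ⅓ × 3rem */
}
```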
Type and Sizing
Type sizing is different. Sure, you say font-size: 50px, but you’ve probably noticed that a defined font size in one typeface can take up a significantly different amount of space than the same size in another typeface.
I’ve seen the results of this problem many times on the web. For example, you’ve got a big H1 on your website where you use a third-party web font service to display that big headline in a traditionally unavailable font. When it loads, it looks like this:
Innovation Is Our
Middle Name
But if that particular font doesn’t load correctly (or perhaps for a split second while the font is actually downloading to the client), you see it displayed in a fallback system font. It’s quite possible it now takes up a different amount of space and the word wrapping is different:
Innovation Is Our Middle
Name
This could end up being a problem because often you intentionally design your type to fit a certain way, like when wrapping words around a certain part of an image or deliberately putting words on separate lines to add a punch. Differences in type sizing between font families can so easily break your design.
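One mitigation, sketched here with a hypothetical font name, is to pick fallbacks whose proportions are as close as possible to the web font, so a failed load disturbs the layout as little as possible:

```css
h1 {
  /* "Fancy Display" is the hypothetical web font; Georgia is a widely
     available fallback chosen to occupy roughly similar space. */
  font-family: "Fancy Display", Georgia, serif;
  font-size: 50px;
}
```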
Font sizes are different. Some quite drastically from one another. That’s because font size is a measure of the type’s containing box, not the type itself:
It turns out that when you choose font size, you actually only choose the size of the box the font lives within. What happens within that box is up to the type designer
This can result in even more problems than the ones just outlined above. For example, not all fonts sit on the same baseline, which can cause alignment issues if you use a fallback. It’s definitely something to be cognizant of when designing for the web where your font choices, though we would like to believe otherwise, are never truly bulletproof.
Type simply doesn’t abide by the rules of static, pixel images:
When you spread two images apart, you can rest assured 20 pixels will mean exactly 20 pixels. When it comes to text, those 20 pixels will be accompanied by extra vertical padding at the bottom and top of each text box — and the text will feel like it’s further apart.
This means that often you have to feel your way through type layout and spacing (while being aware of possible font fallbacks). You won’t find a purely mathematical, bullet-proof approach to beautiful typography. As the author goes on to state:
Type is aligned when it feels aligned, not when it actually is aligned.
And this goes deeper in typography. Superscripts aren’t just the same glyphs shrunk down. Bold characters aren’t just the same letters with a stroke or two on them. Italic words aren’t just the normal versions slanted ten degrees. Type designers optimize these variations with subtle differences. They are all new shapes, redrawn from the “regular” ones so that they feel and appear optically correct.
At the end of the day, lots of these typographic guidelines are here because that’s what we’ve grown used to. And because we’ve grown used to it, it’s best to follow those norms because it sets up expectations between you and the reader.
A lot of [this] might seem arbitrary, but that’s typography for you, too: some of it is not things that are objectively better, just things we’ve gotten used to over the last few centuries.
If you’re going to break those norms, do it for a reason. Design intentionally.
As always, insightful progressive enhancement thoughts around service workers vs. web components:
The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”
Service workers work well and fail well. If a browser supports service workers, the user gets all the benefits. If a browser doesn’t support service workers, the user gets the same experience they would have always had. Web components (will) work well, but fail badly. If a browser supports web components, the user gets the experience that the developer has crafted using these new technologies. If a browser doesn’t support web components, the user gets…probably nothing. It depends on how the web components have been designed.
It’s so much easier to get excited about implementing service workers. You’ve literally got nothing to lose and everything to gain.
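That “fail well” property falls out of the usual registration pattern, sketched below (the /sw.js path is hypothetical): the feature test is the enhancement boundary, and unsupported browsers simply skip the whole thing.

```javascript
// Feature-detect first: browsers without service worker support take
// the early return and get the same experience they always had.
const supportsServiceWorker =
  typeof navigator !== "undefined" && "serviceWorker" in navigator;

function registerServiceWorker() {
  if (!supportsServiceWorker) return null; // graceful non-enhancement
  // Even registration failure is non-fatal: the page keeps working.
  return navigator.serviceWorker
    .register("/sw.js")
    .catch((err) => console.error("service worker registration failed", err));
}

registerServiceWorker();
```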
Why it’s ok to be failing while trying to find your way:
To get where we need to go, we have to do it wrong first...If we can continue to work together and consciously balance these dual impulses—pushing the boundaries of the web while keeping it open and accessible to everyone—we’ll know we’re on the right track, even if it’s sometimes a circuitous or befuddling one. Even if sometimes it’s kind of bad. Because that’s the only way I know to get to good.
Love that last bit: the only way to get good is to be bad.
This article is more interesting views on progressive enhancement, though there’s not a lot of novelty here. Progressive enhancement, though arguably not for everyone, seems ideal for a government website.
The more time I spend developing for the web the more I like the concept of progressive enhancement, if nothing else than for its 1) reach and 2) longevity. Pure javascript single-page-apps these days can be so brittle and tenuous in terms of longevity.
“progressive enhancement is about resilience as much as it is about inclusiveness.” ... Building in resilience is also known as defensive design: a system shouldn’t break wholly if a single part of it fails or isn’t supported.
We have a mandate to provide digital services to everyone in the UK and many beyond. Many users access services in different ways to the configuration tested by developers. If a person visits GOV.UK we want them to be able to complete their service or access the information they need, regardless of whether we’ve tested their configuration or not.
I’ve been creating web stuff for over 10 years, and I’ve been working with Git for probably the last 4 or 5 of those. So I consider myself relatively fluent in basic Git commands like commit, push, pull, etc. However, I’ve never felt like I really understood Git, so I grabbed this book thinking it might help. One of the more lasting impressions this book left me was a concept introduced in the preface of the book by Mandy Brown:
Knowing when and how to commit a change is more than just a means of updating code—it’s also a practice for communicating and sharing work. ... By all means, devour the following chapters in order to understand how to manage merge conflicts and interpret a log. But don’t forget that Git’s ultimate audience isn’t machines—it’s humans.
This idea of looking at Git as a communication tool is reinforced throughout the entire book. Git can be seen as a way to tell a story about what’s happening to your codebase in a way that’s much “closer to the metal” than other communication tools like Slack or email. It’s made me rethink my actions in Git: how I craft commit messages, how I tag branches, and how I propose and merge pull requests. Viewed not merely as terminal commands but as methods of communicating with the humans involved in a project, Git becomes a powerful tool of communication over the life arc of your codebase.
For example, when I was first introduced to Git I thought it rather laborious. I liked the idea of saving changes often but having to write a message for every save? That seemed a bit much. Which is why my commit history looked something like:
Initial work on sidebar widget
more changes
still more changes
last change
ok this is the last change
Now after reading this book, I want to be more thoughtful in my approach to Git and commit messages. I want to have the change of mindset the author himself had and describes in the book:
I came to appreciate the benefits of having every significant version of my projects stored, annotated, and neatly organized ... [it helped me] to think of commits as significant changes, as opposed to the hundreds of little changes I might save in a given hour. The extra steps involved in committing ... have ultimately helped me develop a more thoughtful and judicious way of working.
I think if you work in Git long enough, you’ll begin to appreciate how all those little commits and actions stack up over the course of a project. To have a couple year-old project that is beautifully documented in Git is something you’ll never get through shortcuts. You’ll only get it through disciplined, thoughtful work over a long span of time. Chris Beams talks about this in his article “How to Write a Git Commit Message”. He begins by showing two examples of commit histories, one earlier in his project when he wasn’t caring about commit messages and the other later in the project when he began caring:
The former varies wildly in length and form; the latter is concise and consistent. The former is what happens by default; the latter never happens by accident.
If you’ve worked in software, you know the truth of those words: “the former is what happens by default, the latter never happens by accident”. The author of “Git for Humans” touches on this same point in his book:
Every commit you add to your repository contributes to the historical record of your project, so it’s a good idea to make the best, most meaningful commits you can.
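As a purely illustrative sketch (my own, not an excerpt from the book), here is the shape of a commit message that treats the history as communication:

```
Summarize the change in about 50 characters

In the body, explain what changed and why rather than how; the diff
already shows the how. Wrapping at about 72 characters keeps the
message readable in git log.
```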
That’s what I enjoyed most about this book. I didn’t come away with a lot of technical tips and tricks on using Git (though I’m sure as a newbie to Git you could gain precisely that). Rather, I came away with a broader view of Git as a tool and a process. Git can be about version control, yes. But it can also be about regular people making changes, evolving a code base, crafting a history, and doing it all together. Git can be a story, the story of your time with other people on a project, a biography of yourself, your team, and your product.
Here’s the thing about this book: it covers a lot of information that, if you’re current on the latest trends in web design, isn’t new. Don’t get me wrong, the author covers it in a clear, concise manner. If I was a beginner to the field, I would find this book very useful. But as a practitioner, I found the book mostly reviewing things I’m already doing every day (or am at least aware of in some fashion). I would still think the book is well written and would recommend it to anyone who is new to the landscape of responsive web design and wondering how all this responsive stuff is actually accomplished. But for a practitioner there won’t be a lot of new “tips and tricks”. With that said, here are a few things that stood out to me that I enjoyed:
Ajax-Include Pattern
Ethan covers the topic of conditionally loading menus and other content via javascript (as opposed to serving everything on page load and then just showing/hiding parts via CSS). One method he gives for accomplishing this, which I had not seen but found to be quite clever and semantic, is the Filament Group's Ajax-include pattern. You can simply reference the HTML you want loaded in an HTML5 data attribute, like so:
<li data-append="/politics/latest-stories.html">
<a href="/politics">Politics</a>
</li>
I thought this a great example of progressive enhancement. If javascript is available on the client, it fetches that document fragment and appends it in context.
For what it’s worth, the plugin appears to have a good API which allows additional options, such as conditionally loading the content based on a media query, i.e. data-media="(min-width:500px)".
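A rough sketch of the idea (this is not the Filament Group plugin itself, just a hypothetical re-implementation of the pattern): find an element with a data-append attribute, optionally gate on data-media, fetch the fragment, and append it. The injectable fetchFn/matches hooks exist only to make the sketch easy to exercise outside a browser.

```javascript
// Hypothetical sketch of the Ajax-Include pattern, not the real plugin.
async function ajaxInclude(el, {
  fetchFn = (url) => fetch(url),
  matches = (q) => matchMedia(q).matches,
} = {}) {
  const url = el.getAttribute("data-append");
  if (!url) return false;                      // nothing to include
  const media = el.getAttribute("data-media");
  if (media && !matches(media)) return false;  // media query not satisfied
  const res = await fetchFn(url);
  el.insertAdjacentHTML("beforeend", await res.text());
  return true;                                 // fragment appended
}
```

Without JavaScript, the plain `<a href="/politics">` link still works, which is what makes this progressive enhancement rather than a hard dependency.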
A/B Testing
This is the reason we A/B test in the first place, because the findings of others...are all unproven until they've been tested against our customers, on our platform.
That’s Ethan quoting Michel Ferreria talking about how A/B test results can be useful, but you’ve got to remember to weigh them against your own user base. So often it feels like A/B test results from another product are improperly used to justify decisions in your own product. Obviously people on your team are not trying to sabotage your product. We all just have biases and try to use data to back up our bias. We make the results fit our desired outcome.
I think it’s good to remember (and point out where necessary) that A/B tests from others are performed under conditions unique to them, conditions that often will not mirror those of your own product. You wouldn’t take the results of a research project in Antarctica proving it’s cold outside to mean everyone in the desert in Phoenix should buy a coat. They are different circumstances, different conditions. By all means, weigh the data from an A/B test against your own user base and product and make use of the work of others. Just remember: data isn’t fact just because it’s data.
The <img> tag and max-width: 100%
One of the supposed downsides of simply setting max-width: 100% on all <img> elements is that some images don’t scale down nicely. For example, let’s say you have a 1200x1200 pixel infographic. At full resolution the type is set at 16 pixels. That means when the infographic’s <img> element is responsively scaled down on a mobile device to a width of say 300x300 pixels, the text on that graphic is going to be completely illegible.
Now, from a technological standpoint, some people may argue that what you need are tools, tools, tools! Design the infographic in three different sizes, crop it for seven different media queries, and output fifteen different versions of one image in order to account for sizing, pixel density, etc. Then serve all those different images using the <picture> element with a javascript polyfill (or some other cutting edge feature).
Now you could well go and do that. And there are cases when I’m sure that is proper. But I’d like to suggest another solution: just wrap the <img> element in an <a> tag. That’s it. If the image gets resized to a small viewport, that’s ok. If the user needs to see the full-sized image, they can simply click on it and then use their device’s native pan/zoom tools. Sure, some might argue this solution isn’t as optimal as the other scenario, but it’s just as accessible (one could argue possibly more so). A simple link can be quite effective because, hey, that’s how the web was designed to work: linking to resources on computers. I’ll bet a web page designed today with <a><img></a> will still work in twenty years on a web-accessible device, while a hundreds-of-lines javascript polyfill designed to fetch and serve one of twenty different images may not.
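The whole suggestion fits in a few lines (paths and alt text are hypothetical):

```html
<style>
  img { max-width: 100%; height: auto; }
</style>

<!-- Scaled down inline; click through for the full-size original. -->
<a href="/img/infographic.png">
  <img src="/img/infographic.png" alt="Full-size infographic">
</a>
```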
Mobile Desktop First
One of the reasons I often start my designs at 960px and then design more narrow screens as I go is aptly described by the guys who did the Boston Globe website, as quoted by Ethan in this book:
Our designs began at 960px, arguably the most complicated breakpoint, with several columns of content. Maybe this just came naturally after years of designing for that width. But I think it’s more than that. It’s easier to design with more screen real-estate — you see more at one time, you have a more nuanced hierarchy ... So starting at 960, we designed downward. Every decision informed the one before it and the one after; we flipped back and forth between break points a lot. As the Mobile First mantra suggests, designing for mobile was most instructive because it forced us to decide what was most important. And since we refused to hide content between break points, the mobile view could send us flying back up to the top level to remove complexity. The process felt a bit like sculpting.
I think that’s an apt justification for how I’ve arrived at my current process for responsive design. Start at 960px and get a good grasp of content relationships, patterns, and hierarchy. As you transition those responsive modules down to mobile break points, things often break and you discover better solutions for content relationships, UI patterns, and element hierarchy. This sends you back up to 960 pixels where, often, you end up solving problems that were slightly nagging you in the first place. Moving up and down the viewport spectrum and seeing how things flow down to smaller break points helps me arrive at the designs I want without starting “mobile first”.
Final Points
In writing about the need for effective language around responsive design, Ethan writes:
We need a design language that’s as nimble and modular as our layout systems are becoming.
This describes one of the points I felt Ethan was trying to hammer home throughout this book. He wasn’t just writing a tutorial-style book on “how-to responsive design”. To me, at a higher level this book was a suggestion to change the way we think and talk about responsive design rather than the way we implement it. Talking about process and methodology is always harder than technical implementation, often because there is no language for doing it. We haven’t really come up with a really concise vocabulary for responsive web design yet, at least that’s what Ethan suggests in this book. That, more than any particular technical tip or trick, will help propel us to more effective web designs. As the author Jan Swafford once stated, “To discover new means of expression is to discover new territories of the human”.
The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying “learn how to do it manually first, then use the wizzy tool to save time.” Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don't save us time learning.
This is the problem computers present to us in general, not just in coding. They abstract tasks for us. Even something as simple as a spell checker. If you grow up never really learning to spell but always relying on a spell checker (or autocorrect), you'll never be able to write without the assistance of a computer. The computer has become a crutch. As the author stated, abstractions and automation can help you save time doing but they cannot help you save time learning.
An interesting insight into the idea of “holistic management”, where you look at how decisions made for and on behalf of your team will ripple through and affect your larger organization:
Just like we ask designers, engineers and PMs to consider the entire system when designing and building their slice of it, we should also ask the same of ourselves as managers.
Interesting look at the performance difference between mixin and extend in Sass. I found this particular point refreshing:
Y’see, when people talk about the performance of mixins, they’re usually thinking about filesize on the filesystem. But because we have gzip enabled (you do have gzip enabled, right?), we should be thinking about filesize over the network.
I often forget this fact. As Harry Roberts, the author, goes on to show in the article: filesize on the filesystem is one thing and filesize gzipped can often be something else entirely. In his example, File1 was 150% larger than File2 upon compilation, but after gzipping, File1 was 33% smaller than File2.
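The point is easy to demonstrate from the command line. A hypothetical sketch: fake a mixin-style stylesheet (the same rule body repeated many times) and a similarly sized file with little repetition, then compare raw vs. gzipped sizes; the repetitive file compresses dramatically better.

```shell
# Fake "compiled" mixin output: duplicated declarations make the file
# large on disk but extremely repetitive.
for i in $(seq 200); do
  printf '.a{color:red;padding:1em}\n'
done > mixins.css

# A similarly sized file with little repetition, for contrast.
head -c 4000 /dev/urandom | base64 > extends.css

for f in mixins.css extends.css; do
  echo "$f: raw=$(wc -c < "$f") gzipped=$(gzip -c "$f" | wc -c)"
done
```

The file names are made up for the demo; the takeaway is that the duplication mixins produce mostly disappears over a gzip-enabled network.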
90 percent of design is typography. And the other 90 percent is whitespace. Style is the servant of brand and content. Style without purpose is noise.
Rediscovering this with the renewed interest in speed, basic page structure semantics, JavaScript fatigue, etc.
One illustration or original photo beats 100 stock images.
With the ubiquity of people being online, they see everything across the Internet (especially the younger generation), so if you’re not unique, they’ll notice. If you’re bland, they’ll notice.
Nobody waits. Speed is to today’s design what ornament was to yesterday’s.
This is an interesting observation. I feel like it puts into words this nagging feeling I’ve had the last few months: speed is design. On the web, speed is just as much a part of the design as the grid or typography. In many cases, I think some people would prefer the speed and simplicity of vanilla HTML markup over giant JavaScript apps when all they need to see or read is a few dozen lines of text.
By no means do I consider myself a public speaker. However, in the limited experience I have speaking to groups, these tips seem relevant to public speaking.
First: do you ever wonder why most talks begin with a joke or story? Here’s why:
I’m finding that it’s very important to just get up there and talk a little bit, make some dumb jokes, let people get used to you existing.
Second: it’s generally a good idea to get rid of slides and notes and just have statements that you can respond to, then it feels conversational as opposed to dictated:
I've thrown away most of the slides with bullets, and I’ve thrown away all of my slide notes. Notes are terrible and half the time you can’t see them anyway. Then, what I try to do is make every slide a little statement, and respond to it...By throwing together statistics and pictures and quotes, it looks like I’m giving a talk, but what’s really happening is that I’m having a conversation with myself. The slides are saying things to me and I’m responding.
Apparently, on-demand valet parking services are springing up in Silicon Valley as a thing — it’s like the Uber of parking. You drive your car, pull out the app, and summon a valet who will park your car in a secure lot and retrieve it whenever you want (for a small fee of course).
The author of the article makes this observation, which I can’t help but agree with wholeheartedly:
In our oversaturated world of on-demand anything, the emergence of insta-valet services is, sadly, not shocking. We want everything to be cheap and easy … but at what point does our obsession with convenience go from maximizing efficiency to optimizing laziness?
That last line so perfectly sums up so many of the venture-backed startups I’ve seen: “at what point does our obsession with convenience go from maximizing efficiency to optimizing laziness?” Why does it seem like almost every new consumer app/service merely panders to our indolence?
An interesting look at the tech culture differences between Brighton, England, and San Francisco, California, centered around the person of Jeremy Keith:
[Keith said about Brighton] “It’s not the classic startup obsession with a quick-buck IPO tunnel vision. It’s more about long-term happiness. Nobody’s going to work too hard if working too hard means giving up fun, giving up life. Or else what is all this technology for?”
An interesting look at the root meaning of the word “art” and its relationship to design:
We know that where we perceive no patterns of relationship, no design, we discover no meaning … The reason apparently unrelated things become interesting when we start fitting them together … is that the mind’s characteristic employment is the discovery of meaning, the discovery of design … The search for design, indeed, underlies all arts and all sciences … the root meaning of the word art is, significantly enough, ‘to join, to fit together’ — John Kouwenhoven, as quoted in “A Designer’s Art” by Paul Rand (xiii)
The essence of graphic design:
Graphic design is essentially about visual relationships–providing meaning to a mass of unrelated needs, ideas, words, and pictures. It is the designer’s job to select and fit this material together–and make it interesting.
On the process of graphic design, and why respect for the individual and his/her process is absolutely necessary (LOVE THIS!!):
The separation of form and function, of concept and execution, is not likely to produce objects of aesthetic value … any system that sees aesthetics as irrelevant, that separates the artist from his product, that fragments the work of the individual, or creates by committee, or makes mincemeat of the creative process will in the long run diminish not only the product but the maker as well.
A statement on the process of design, the first part always being that the designer must break down before he can build up:
Design starts with three classes of material:
- The given - product, copy, slogan, logotype, format, media production process.
- The formal - space, contrast, proportion, harmony, rhythm, repetition, line, mass, shape, color, weight, volume, value, texture.
- The psychological - visual perception, optical illusion problems, the spectator's instincts, intuitions, and emotions (as well as the designer's own needs).
As the material furnished him is often inadequate, vague, uninteresting or otherwise unsuitable for visual interpretation, the designer's task is to restate the problem. This may involve discarding or revising much of the given material. By analysis (breaking down of the complex material into its simplest components - the how, why, when and where) the designer is able to begin to state the problem.
Lastly, the job of artist (and by extension a good graphic designer) is to call attention to the ordinary, to make people stop and reconsider what they believe they already understand:
[designers should practice] the fine art of exhibiting the obvious in [the] unexpected...The problem of the artist is to defamiliarize the ordinary.
A great little piece of writing touching on the art of human relationships. It’s based on an experience Jeremy Keith had trying to persuade others about the importance of accessibility:
If I really want to change someone’s mind, then I need to make the effort to first understand their mind. That’s going to be far more productive than declaring that my own mind is made up. After all, if I show no willingness to consider alternative viewpoints, why should they?
A very on-point lecture on the limitations and dangers of our digital mentality: “collect everything and maybe we will use it later.”
One example the speaker touches on is the pharmaceuticals industry and how, because of the “big data” philosophy they bought into, they are now at a point of diminishing financial returns in new drug development:
This has been a bitter pill to swallow for the pharmacological industry. They bought in to the idea of big data very early on. The growing fear is that the data-driven approach is inherently a poor fit for life science. In the world of computers, we learn to avoid certain kinds of complexity, because they make our systems impossible to reason about.
Note that the speaker suggests that the diminishing returns result from the fact that computers need unambiguous rules in order to make sense of things. Thus, in order to create data models and make sense of the world, programmers have to throw out “certain kinds of complexity” which are inherently and naturally found in the realm of biological science. As the speaker states later on, “Nature is full of self-modifying, interlocking systems, with interdependent variables you can't isolate.”
Ultimately, what we are dealing with in our computer systems are humans and humans adapt. That means when you create a data model around a person, the model inevitably goes out the window because a person adapts and changes, not just naturally over time, but they also react and change according to the model that is enforced on them.
An example of how humans adapt to numerical requirements, as drawn from this transcript, is found in the anecdotal story of a nail factory. Once there was a nail factory. In the first year of its five-year plan, the nail factory’s management evaluated employees by how many nails they could produce. As such, employees produced hundreds of millions of uselessly tiny nails. Seeing their mistake, management changed their production goals to measure nail weight rather than nail quantity. As a result, employees produced a single giant nail.
Perhaps this story seems unreal, but the speaker provides a less fictitious example of how humans adapt to the systems imposed on them and how that ultimately renders the collected data useless:
[An] example is electronic logging devices on trucks. These are intended to limit the hours people drive, but what do you do if you're caught ten miles from a motel? The device logs only once a minute, so if you accelerate to 45 mph, and then make sure to slow down under the 10 mph threshold right at the minute mark, you can go as far as you want. So we have these tired truckers staring at their phones, bunny-hopping down the freeway late at night. Of course there's an obvious technical countermeasure. You can start measuring once a second. Notice what you're doing, though. Now you're in an adversarial arms race with another human being that has nothing to do with measurement. It's become an issue of control, agency and power. You thought observing the driver’s behavior would get you closer to reality, but instead you've put another layer between you and what's really going on. These kinds of arms races are a symptom of data disease. We've seen them reach the point of absurdity in the online advertising industry, which unfortunately is also the economic cornerstone of the web. Advertisers have built a huge surveillance apparatus in the dream of perfect knowledge, only to find themselves in a hall of mirrors, where they can't tell who is real and who is fake.
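The sampling loophole the speaker describes is easy to sketch. Here’s a toy simulation (entirely made-up numbers, not from the talk): a logger that samples speed once a minute can be gamed by speeding between samples and dipping under the threshold right as each minute ticks over.

```python
# Toy illustration of gaming a once-a-minute speed logger.
# All numbers are invented for illustration.

def logged_speeds(speed_at_second, interval_s=60, total_s=600):
    """Return the speeds the device records, sampling once per interval."""
    return [speed_at_second(t) for t in range(0, total_s, interval_s)]

def bunny_hop(t):
    """Drive 45 mph, but dip to 5 mph just as each minute ticks over."""
    return 5 if t % 60 == 0 else 45

samples = logged_speeds(bunny_hop)
print(samples)            # every sample reads 5 mph
print(max(samples) < 10)  # the log says the truck never really moved
```

Sampling once a second would close this particular gap, but — as the speaker notes — that just escalates the arms race rather than getting you closer to reality.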
Love this assertion the speaker makes: very often data is just a mirror, it reflects whatever assumptions we bring to it.
The belief in Big Data turns out to be true, although in an unexpected way. If you collect enough data, you really can find anything you want.
See the image for context.
This is a graph from Tyler Vigen's lovely website, which has been making the rounds lately. This one shows the relationship between suicides by hanging and the number of lawyers in North Carolina. There are lots of other examples to choose from. There's a 0.993 correlation here. You could publish it in an academic journal! Perhaps that process could be automated. You can even imagine stories that could account for that bump in the middle of the graph. Maybe there was a rope shortage for a few weeks? 'Big data' has this intoxicating effect. We start collecting it out of fear, but then it seduces us into thinking that it will give us power. In the end, it's just a mirror, reflecting whatever assumptions we approach it with. But collecting it drives this dynamic of relentless surveillance.
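Spurious correlations like Vigen’s are easy to reproduce with made-up data: any two series that merely share a trend will correlate almost perfectly. A quick sketch (invented numbers, not the actual North Carolina data):

```python
# Two made-up series with no causal link, both drifting upward over ten
# "years". The shared trend alone produces a near-perfect correlation.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hangings = [60, 62, 65, 63, 68, 70, 73, 72, 76, 79]                      # invented
lawyers  = [9000, 9100, 9350, 9200, 9600, 9800, 10050, 9950, 10300, 10500]  # invented

print(round(pearson(hangings, lawyers), 3))  # prints 0.999 — and means nothing
```

Collect enough series and some pair will clear any correlation threshold you pick; the number says nothing about causation.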
An overview of how Instagram has been engineered over the last five years. A few points stuck out to me.
Choose simple solutions and do not over engineer in order to future proof:
Our mantra, “Do the simple thing first” ... Since there were only two of us, we had to determine the fastest, simplest fix each time we faced a new challenge. If we had tried to future-proof everything we did, we might have been paralyzed by inaction. By determining the most important problems to solve, and choosing the simplest solution, we were able to support our exponential growth.
It takes an incredible amount of focus to do this, but
we often say “do fewer things better” inside Instagram
However, with that said, the author does state that this is not the answer all the time for everyone:
Doing the simple thing first doesn’t mean your solution will work forever. We’ve learned to be open to evolving our product...
Just a cool video about craftsmanship. I love how the philosophy behind the craftsmanship is what drives the work forward, not the data behind the product.
As far as what I hope the audience thinks of the sound, I would hope that they think the sound was all shot on the days they shot the movie and that it’s all there. — Richard King, Supervising Sound Editor & Sound Designer Interstellar
I found it interesting how the ultimate goal of the sound team was to have the audience not even notice their work. After all the extremes they went to — the airplane graveyard, the ice shoes, the sand blaster — they wanted their work to go unnoticed by the audience and instead have them simply assume it was all a product of the original film shoot.
I find such an interesting parallel to this in visual design: good, simple design is the obvious choice. So obvious, in fact, that people don’t even notice it. They just naturally assume “how could it be any other way?”
Reminds me of this quote, from the book The Inmates are Running the Asylum (which, if you haven’t read it, is great):
If, as a designer, you do something really, fundamentally, blockbuster correct, everybody looks at it and says, “Of course! What other way would there be?” This is true even if the client has been staring, empty-handed and idea-free, at the problem for months or even years without a clue about solving it. It’s also true even if our solution generates millions of dollars for the company. Most really breakthrough conceptual advances are opaque in foresight and transparent in hindsight. It is incredibly hard to see breakthroughs in design. You can be trained and prepared, spend hours studying the problem, and still not see the answer. Then someone else comes along and points out a key insight, and the vision clicks into place with the spectacular obviousness of the wheel. If you shout the solution from the rooftops, others will say, “Of course the wheel is round! What other shape could it possibly be?” This makes it frustratingly hard to show off good design work. — pg. 200
This is in line with what Alan Dye, Vice President of User Interface Design at Apple, said about Apple’s design goals:
Inevitable is the word we use a lot. We want the way you use our products to feel inevitable.
The goal is to make it seem as if the designers at Apple can’t even control the form and function of their products because the end goal is so natural and logical, i.e. inevitable.
Hayao Miyazaki, a famous Japanese animation director who created classics such as Spirited Away and Princess Mononoke, described an aspect of his creative process that pinpoints how many of us in technology feel:
Making films is all about—as soon as you’re finished—continually regretting what you’ve done. When we look at films we’ve made, all we can see are the flaws; we can’t even watch them in a normal way. I never feel like watching my own films again. So unless I start working on a new one, I’ll never be free from the curse of the last one. I’m serious. Unless I start working on the next film, the last one will be a drag on me for another two or three years.
This was written back when iOS 7 was first introduced to the world. I read it then and made this note. In the years since, I’ve always done a “spring cleaning” of my notes and this one always persisted. I think Frank perfectly captures a description of my job over the last three years.
Every time I read this quote, it feels more and more relevant. Likely because “Experience gives a person the eyes to imagine their small choices in aggregate.”
Part of being a good designer is having a hatred for inconsistencies, so I take the interface’s unevenness to mean a hurried timeline, rather than an unawareness of the inconsistencies. Working on multiple screens, apps, and user flows means that certain aspects of the whole system will fall out of sync with each other as the later parts’ lessons override previous choices. The last step of most design processes is to take the lessons learned along the way and apply those best practices to the niggling incongruencies that have inevitably sprung up. This last step usually gets cut under tight deadlines, because the work is technically “done,” but just not “right.” Unfortunately, this kind of consistency is usually seen as a design indulgence that can be postponed. “We’ll iterate,” designers are usually told, but everyone knows you lose a bit of the luster of a tight first impression.
Jackson Browne, an American singer-songwriter who’s been inducted into the Rock and Roll Hall of Fame, did an interview around his newest album Standing in the Breach and revealed a few nuggets about the creative process of making his album which I found relevant to any kind of creative process in general.
One of the first questions the host asked was if he had a personal favorite track on his record. He responded:
No I like them all. Each one of them, at one time or another, has been my favorite song. That’s what it takes to finish them, they have to be something I’m really interested in...[my] songs aren’t always finished when I start recording them. I may just rewrite a verse, or I may actually take something out. It’s a process of exploring. I want to know what the song is capable of doing musically before I finish the subject in terms of lyrics. Nothing’s worse than writing too many verses and having to throw some out. You know, if you find out you don’t really want to hear two verses before the chorus but you’ve already set up the narrative. That’s happened a couple times, where I [wrote] songs just on the acoustic guitar but when I started to play it with a band I realized “I don’t want to hear another verse, I want to [go right into the chorus]”. I’m always rewriting but I don’t like to throw things away.
That last sentence of his is great: always rewriting, refining, simplifying, making better, making more concise yet impactful and deep. Great stuff.
Later in the interview he talks about his relationship with his album producer and how well they work together and complement one another:
We’re a perfect match because he’s a good engineer but he’s got infinite patience. I’m a neophyte, in a recording I’m not technical at all, so I need someone to sit there with me while I think about what I want to think about and who doesn’t engage me with what he wants to do, but just does what he wants to do. Some engineers are ambitious and want to talk about what they want to do, “I’m going to do this, I’m going to do that, can we do this?”. I need somebody that’s much more...patient. Someone who’s almost passive and who will allow me to move things around and turn the balances upside down. And things may remain out of balance for long periods of time and then [I’ll bring them back]...In a funny way, we’re bystanders to each others’ work.
I love this idea he expresses about allowing yourself (and having your producer, co-worker, boss, etc. allow you) to put things out of order to discover possibilities so when you arrive at the final product, you know there’s no other way the thing could be because you’ve explored all possible permutations.
In this same vein of continually exploring possibilities, the host asks if it’s hard for him to release songs because, at that point, the song would officially be considered “finished”:
It’s a bit of an acquired skill to know when a song is done because it’s very easy to keep going and keep adding things because it’s interesting, it’s fun. In a way you’re never done. The song is going to continue to grow after the album too, you just have to know how far you can go with this particular recording.
At the end of the interview, the host opens the discussion to questions from the audience. A guest in the crowd asked him how he remembers and tracks his half-baked ideas. He answers by talking about how he keeps track of snippets on his iPhone which helps him a lot because sometimes he remembers the idea of a song better than it actually was, like “oh yeah I was working on this thing that was really great” and then he’ll look up the snippet on his phone and realize “oh wow, that actually wasn’t very good” but that bad idea can spur other good ideas:
Most of my ideas come from mistakes that interest me...I could disappear into my music room with some of my recordings and make a bunch of songs out of the boxes and boxes of my recordings because each of them represents a moment when I thought I was doing something of value or interesting.
Love that first sentence: “most of my ideas come from mistakes that interest me”. That’s why you shouldn’t be afraid of anything you’ve done in the past. It’s all experiential fodder for good things in the future.
What’s so interesting to me about this article is that it was written in 1969. It’s one of those timeless articles where you think, “Man, how did the author so accurately foresee the future?”
This passage encapsulates how I increasingly feel seeing the results of tech announcement events, where companies tout their innovative, revolutionary products which will solve all your problems. But underneath, these solutions are merely more technology presented as the solution to the problems caused by our current technology.
The recent history of technology has consisted largely of a desperate effort to remedy situations caused by previous over-application of technology ... Every advanced country is over-technologized; past a certain point, the quality of life diminishes with new “improvements.” Yet no country is rightly technologized, making efficient use of available techniques. There are ingenious devices for unimportant functions, stressful mazes for essential functions, and drastic dislocation when anything goes wrong, which happens with increasing frequency. To add to the complexity, the mass of people tend to become incompetent and dependent on repairmen—indeed, unrepairability except by experts has become a desideratum of industrial design.
“Technology is causing problems, so let’s throw more technology at the problem.” I believe this quite acutely applies to our current trend in technological innovation. It’s this idea we keep wrestling with of treating the symptoms rather than finding a cure:
It is discouraging to see the concern about beautifying a highway and banning billboards, and about the cosmetic appearance of the cars, when there is no regard for the ugliness of bumper-to-bumper traffic and the suffering of the drivers. Or the concern for preserving an historical landmark while the neighborhood is torn up and the city has no shape. Without moral philosophy, people have nothing but sentiments.
The author also touches on technological automation (emphasis added):
In automating there is an analogous dilemma of how to cope with masses of people and get economies of scale, without losing the individual at great consequent human and economic cost. A question of immense importance for the immediate future is, Which functions should be automated or organized to use business machines, and which should not? This question also is not getting asked, and the present disposition is that the sky is the limit for extraction, refining, manufacturing, processing, packaging, transportation, clerical work, ticketing, transactions, information retrieval, recruitment, middle management, evaluation, diagnosis, instruction, and even research and invention. Whether the machines can do all these kinds of jobs and more is partly an empirical question, but it also partly depends on what is meant by doing a job. Very often, e.g., in college admissions, machines are acquired for putative economies (which do not eventuate); but the true reason is that an overgrown and overcentralized organization cannot be administered without them. The technology conceals the essential trouble, e.g., that there is no community of scholars and students are treated like things. The function is badly performed, and finally the system breaks down anyway. I doubt that enterprises in which interpersonal relations are important are suited to much programming.
But worse, what can happen is that the real function of the enterprise is subtly altered so that it is suitable for the mechanical system. (E.g., “information retrieval” is taken as an adequate replacement for critical scholarship.) Incommensurable factors, individual differences, the local context, the weighting of evidence are quietly overlooked though they may be of the essence. The system, with its subtly transformed purposes, seems to run very smoothly; it is productive, and it is more and more out of line with the nature of things and the real problems. Meantime it is geared in with other enterprises of society e.g., major public policy may depend on welfare or unemployment statistics which, as they are tabulated, are blind to the actual lives of poor families. In such a case, the particular system may not break down, the whole society may explode.
In our haste to see what computers are capable of, we so often misconstrue how well they are actually doing the job we’ve handed off to them:
It is so astonishing that the robot can do the job at all or seem to do it, that it is easy to blink at the fact that he is doing it badly or isn’t really doing quite that job.
When a task is done by a computer rather than a human, its significance and holistic effect are not the same, though we often convince ourselves otherwise.
Gary Larson, creator of The Far Side comic strip, in the preface to his complete comic book anthology:
It's been almost seven years since I hung up my eraser. (For the record, an eraser was the most essential tool I owned.)
Earlier in the introduction, his newspaper editor talked about how fastidious Gary was in writing the captions for his comic strips. He made this observation, which for anyone familiar with The Far Side rings true:
good writing can save bad art, but good art can never save bad writing.
An interesting read on the state of the web and how, just maybe, we should ponder slowing down for one second to consider the direction we’re headed in and contrast that with where and what we want the web to be. Of course to suggest “slowing down” is technological blasphemy. So the author correctly prefaces his article with “Fair warning. You’re going to hate this one.”
Here are a few passages I enjoyed, in no particular order:
Recently I’ve been having serious doubts about the whole push the web forward thing. Why should we push the web forward? And forward to what, exactly? Do we want the web to be at whatever we push it forward to? You never hear those questions.
Pushing the web forward currently means cramming in more copies of native functionality at breakneck speed — interesting stuff, mind you, but there’s just too much of it.
Native apps will always be much better at native than a browser. Instead, we should focus on the web’s strengths: simplicity, URLs and reach.
But why do web developers want navigation transitions? In order to emulate native apps, of course. To me, that’s not good enough.
We’re pushing the web forward to emulate native more and more, but we can’t out-native native. We are weighed down by the millstone of an ever-expanding set of tools that polyfill everything we don’t understand — and that’s most of a browser’s features nowadays. This is not the future that I want to push the web forward to.
A coworker showed me this resource around computer jargon — a hacker’s lexicon if you will (apparently it’s the online version of The New Hacker’s Dictionary). There are some funny terms in there. If you work in technology, you’ll probably enjoy these.
Here are a few I enjoyed:
ambimouseterous:
Able to use a mouse with either hand.
disemvowel:
To partially obscure a potentially provocative word by substituting splat characters (*) for some of its letters (usually, but not always, the vowels). The purpose is not to make the word unrecognizable but to make it a mention rather than a use, so that no flamewar ensues. [Example: “g*n c*ntr*l”]
job security:
When some piece of code is written in a particularly obscure fashion, and no good reason (such as time or space optimization) can be discovered, it is often said that the programmer was attempting to increase his job security (i.e., by making himself indispensable for maintenance). This sour joke seldom has to be said in full; if two hackers are looking over some code together and one points at a section and says “job security”, the other one may just nod.
goat file:
A sacrificial file used to test a computer virus, i.e. a dummy executable that carries a sample of the virus, isolated so it can be studied. Not common among hackers, since the Unix systems most use basically don't get viruses.
Ninety-Ninety Rule:
“The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time.” ... Other maxims in the same vein include the law attributed to the early British computer scientist Douglas Hartree: “The time from now until the completion of the project tends to become constant.”
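The disemvowel entry practically begs for an implementation. A quick sketch in Python, using the splat-for-vowels substitution the definition describes:

```python
def disemvowel(word):
    """Replace vowels with splat characters (*), leaving the word
    recognizable but mentioned rather than used."""
    return "".join("*" if ch.lower() in "aeiou" else ch for ch in word)

print(disemvowel("gun control"))  # g*n c*ntr*l
```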
Because, apparently, there’s so little to talk about anymore, it’s been announced that a computer has written lyrics that rival rap legend Eminem. As such, some have even claimed “rappers might soon lose their jobs to robots”. But as Nicholas Carr points out, that’s a little premature:
Our assumptions and expectations about artificial intelligence have gotten ahead of the reality, in a way that is distorting our view not only of the future but of the very real accomplishments being made in the AI and robotics fields.
Personally, I find this especially true for computer-illiterate people. My dad constantly sees “news” headlines making outlandish claims for AI and therefore has this sense that the robot rapture will soon be upon us.
As someone who works in tech, I find it laughable, borderline ridiculous that as soon as computers do the tiniest little thing “we jump to the conclusion that computers are mastering wordplay and, by implication, encroaching on the human facility for creativity and improvisation”.
Fairly recently, Paul Ford wrote a piece called What is Code? where he tried to explain programming. This is another piece that takes a different (shall we say realistic) view of programming. If Ford’s article sees programming as a cup half-full, this article sees it as a cup half-empty. Both are true, it's just a matter of viewpoint (or current mood).
First, programming is hard. Even if you know lots of programming languages, that doesn't mean you will understand an application written in any particular one of them.
The first few weeks of any job are just figuring out how a program works even if you’re familiar with every single language, framework, and standard that's involved...
The average life of a programmer on the web, remembering that a programmer is such a wide-reaching term (emphasis added):
Say you're an average web developer. You're familiar with a dozen programming languages, tons of helpful libraries, standards, protocols, what have you. You still have to learn more at the rate of about one a week, and remember to check the hundreds of things you know to see if they’ve been updated or broken and make sure they all still work together and that nobody fixed the bug in one of them that you exploited to do something you thought was really clever one weekend... You're all up to date, so that’s cool, then everything breaks....You are an expert in all these technologies, and that's a good thing, because that expertise let you spend only six hours figuring out what went wrong, as opposed to losing your job...And that’s just in your own chosen field, which represents such a tiny fraction of all the things there are to know in computer science you might as well never have learned anything at all. Not a single living person knows how everything in your five-year-old MacBook actually works.
The internet is really just being held together by duct tape and glue:
Websites that are glorified shopping carts with maybe three dynamic pages are maintained by teams of people around the clock, because the truth is everything is breaking all the time, everywhere, for everyone. Right now someone who works for Facebook is getting tens of thousands of error messages and frantically trying to find the problem before the whole charade collapses. There’s a team at a Google office that hasn’t slept in three days. Somewhere there’s a database programmer surrounded by empty Mountain Dew bottles whose husband thinks she’s dead. And if these people stop, the world burns. Most people don’t even know what sysadmins do, but trust me, if they all took a lunch break at the same time they wouldn’t make it to the deli before you ran out of bullets protecting your canned goods from roving bands of mutants … You can't restart the internet. Trillions of dollars depend on a rickety cobweb of unofficial agreements and “good enough for now” code with comments like “TODO: FIX THIS IT’S A REALLY DANGEROUS HACK BUT I DON’T KNOW WHAT'S WRONG” that were written ten years ago. I haven't even mentioned the legions of people attacking various parts of the internet for espionage and profit or because they’re bored.
An article of interesting observations by Sherry Turkle in The New York Times. It details, among other things, how we have "sacrificed conversation for mere connection".
Curating our digital selves
Interesting parallel of the digital world to the real world of advertising in which, as we all know, famous faces on magazines are never quite as they appear:
Texting and e-mail and posting let us present the self we want to be. This means we can edit. And if we wish to, we can delete. Or retouch: the voice, the flesh, the face, the body. Not too much, not too little — just right.
True self reflection requires trust
Why it's so hard to find (or post) anything of deep import in the world of social statuses:
These days, social media continually asks us what’s “on our mind,” but we have little motivation to say something truly self-reflective. Self-reflection in conversation requires trust. It’s hard to do anything with 3,000 Facebook friends except connect.
We expect more from tech and less from each other
We expect more from technology and less from one another and seem increasingly drawn to technologies that provide the illusion of companionship without the demands of relationship. Always-on/always-on-you devices provide three powerful fantasies: that we will always be heard; that we can put our attention wherever we want it to be; and that we never have to be alone.
Sharing proves existence, a la Descartes
I share, therefore I am.
Unfortunately, our internet culture has led many to believe that if they don't share online they will be left out. They'll be forgotten. They'll cease to exist socially.
On a related note, Paul Miller, a journalist at The Verge, has made some interesting observations about our innate desire not to miss out. When we 'miss out' on one thing (the internet), we get to spend our attention on something else (perhaps of greater import).
Ode to a time long gone
Not too long ago, people walked with their heads up, looking at the water, the sky, the sand and at one another, talking. Now they often walk with their heads down, typing. Even when they are with friends, partners, [and] children …
This article by Brian Hoff at The Design Cubicle is probably the best post describing the web design field I've read in a while.
Businesses will spend $40,000 a year on an employee who works eight to five, five days a week. However, for lack of education they won't spend $1,000 on their hardest-working employee: their website. It works 24 hours a day, 365 days a year and interfaces with more clients than any other employee (perhaps even all their employees combined).
Your website is not a feature that you can half-ass. Spend some money. Protect your future. A good website works hard for your business. Much harder than many employees can offer.
I've worked at Arc90 (and technically still do), and I can verify Alex's observations. There truly is a thriving developer culture at Arc, and it exists because of the company traits he points out.
When your employee gets up in the morning, you want the most exciting thing in her day to be related to your company ... When your employee discovers something cool, you want their second thought to be “how can I use this at work?”.