Rain pattering on a cheap office window, the faint smell of burnt coffee, and a single sticky note curled at the corner of a monitor — Tuesday at 10:13 a.m., nothing dramatic, just the little debris of daily work.
That small scene felt oddly telling to me as I sat through yet another post-mortem about hiring. I kept thinking: hiring has accreted rituals, myths, and cruft until the whole machine squeaks. That squeak is what people mean when they say “hiring sucks.”
Why hiring looks so bad
Over the last decade, hiring has alternated between two extremes: frantic mass recruiting during boom times and brutally slow, over-screened filtering when budgets tighten. Tech’s infamous cycles left scars for candidates and managers alike. Newsrooms and industry outlets like Reuters have charted the roller coaster of layoffs, rehiring, and shifting priorities that make consistent, fair hiring practices hard to sustain. Pew Research and long-running industry surveys show younger workers are more skeptical of corporate narratives and less willing to tolerate a bad interview experience.
For engineers, those problems are magnified. Hiring often hinges on a few ritualized moments — a whiteboard test, a take-home exercise, or a 45-minute algorithm grilling — that say more about a company’s hiring muscle memory than about someone’s ability to do day-to-day work. The result: qualified people get filtered out, teams get mis-sized, and morale takes a hit.
What engineering interviews miss
The classic whiteboard hour is easy to lampoon because it feels performative. Ask a mid-career coder to recreate a classic graph algorithm while a hiring panel stares and the clock ticks, and you’ll get nerves, not a realistic measure of job fit. Skill tests do matter. But the form often doesn’t fit the function.
Maya Alvarez, 29, a back-end engineer in Austin, remembers one panel that “felt like a pop quiz in high school — and I, like, hadn’t slept.” She adds, “I got the problem, but I blanked for five minutes. I still can’t say whether I failed that day or the format did.” Her voice tightens a bit. “That sucks when you’ve got student loans and two kids and you’re trying to keep your momentum.”
Other common misses: over-relying on pattern recognition; ignoring context, like familiarity with legacy stacks; overlooking soft logistics, such as scheduling windows that force candidates to juggle their current jobs and phone screens; and punishing candidates for being nervous or bluntly human. Research covered by outlets such as Harvard Business Review shows that structured interviews and work-sample tests predict job performance better than unstructured chats — yet firms cling to the old rituals because they’ve always used them.
Small fixes that actually help
Some changes are low-cost and high-impact. Structured interviews with clear rubrics treat candidates more consistently. Shorter, realistic take-home tests (designed with input from the team that will mentor the hire) showcase day-to-day work without turning hiring into a second job. Clear timeline commitments and a single point of contact reduce anxiety — the simple human relief of knowing “someone will email me Wednesday” is underrated.
Tom Reilly, 45, an engineering manager who’s hired across startups and large firms, notes, “We started giving people a 48-hour window for take-homes and no live whiteboard. Honestly, it cut time-to-offer and we got better hires. Not perfect — but better.” He shrugs when I ask about risks: “Some teams worry about leaks, or about candidates gaming tests. That’s fair, but I’d take imperfect signals that are more realistic any day.”
Larger changes that companies must consider
Some fixes are harder because they require cultural shifts: decentralizing hiring decisions to teams that actually do the work; paying candidates for long take-home tests; and treating recruiting as a product with user research — interviewing past candidates about pain points and iterating. These moves cost money and time, which may explain why many organizations stall.
There’s also debate about bias. Structured processes reduce bias but don’t eliminate it. The evidence remains mixed on whether anonymized coding tests meaningfully close gaps in practice, or whether they simply displace bias to later interview rounds. The reality is likely more complicated: change must be multifaceted and persistent.
When teams try radical ideas, outcomes vary. Some companies have replaced whiteboards with paired programming sessions that mimic onboarding, yielding stronger cultural fits. Other firms experiment with “trial weeks” that are paid and short — think of a mini-contractor stint. Those work for some roles but are impractical at scale for high-volume hiring.
What candidates can do without waiting
Candidates don’t have to be passive. They can ask for clarity: what problem will the test simulate, who will review it, and how long will feedback take? Requesting a test that mirrors real tasks (not contrived puzzles) is fair. If an employer insists on an onerous process, that itself signals the team’s values and priorities — maybe not a good match.
A short personal aside: I once sat in an interview where the candidate’s laptop bore a tiny sticker of a worn golf glove. We spent five minutes trading stories about terrible rounds from the 1998 season — a detour that, oddly, did more to reveal shared work styles than the panel’s scripted questions did. True, not every interview needs a sticker talk. But human context matters.
A few things managers get wrong (and a surprising diversion)
Managers often mistake volume for rigor. More interviews do not equal better judgment. Training interviewers — a modest investment — yields big returns. And please: stop ghosting. Candidate experience surveys from long-standing HR groups like SHRM repeatedly highlight how damaging unreturned applications are to employer reputation.
Unexpectedly, some teams have tried social experiments: one small company tried “coding karaoke” — asking people to publicly explain code to an audience. It was weird. It revealed communication skills, sure, but also scared off introverts who might be excellent at deep work. A curiosity I couldn’t shake: novelty can be revealing, but not always fair.
An awkward truth and an open question
It’s tempting to seek a single silver bullet. There isn’t one. Investing in better process helps, but it costs real dollars and attention. Will the companies with the most resources fix hiring first and widen the gap? It remains unclear whether widespread change will be led by regulation, worker expectations, or market competition.
Final thought
If hiring feels broken, that’s partly because current practices were designed for managers, not for the people hired. Fixes are technical — clearer rubrics, better tests, improved logistics — but also cultural: pay candidates for work, give feedback promptly, and treat interviewing like onboarding in miniature. Small humane changes make hiring more efficient, less anxiety-inducing, and ultimately fairer.
That’s where the work should start. Not with a whiteboard ritual. Not with lip service. With the little human things: a prompt reply, a timeline kept, and yes, a coffee ring wiped away from the notebook.
Sources and context (selected)
– Coverage in Reuters charts tech hiring cycles and their aftermath for talent markets.
– Pew Research has explored generational attitudes toward employers and the job market.
– SHRM and other long-running HR surveys document candidate experience trends like ghosting and slow feedback.