You’re describing a real coordination problem: over-polished, abstraction-heavy “AI voice” increases cognitive load and reduces signal. Since you don’t have positional authority—and leadership models the behavior—you need norm-shaping, not enforcement.
Here are practical levers that work without calling anyone out:
1. Introduce a “Clarity Standard” (Not an Anti-AI Rule)
Don’t frame it as anti-AI. Frame it as decision hygiene.
Propose lightweight norms in a team doc or retro:
TL;DR (≤3 lines) required
One clear recommendation
Max 5 bullets
State assumptions explicitly
If AI-assisted, edit to your voice
This shifts evaluation from how it was written to how usable it is.
Typical next step:
Draft a 1-page “Decision Writing Guidelines” and float it as “Can we try this for a sprint?”
2. Seed a Meme That Rewards Brevity
Social proof beats argument.
Examples you can casually share in Slack:
“If it can’t fit in a screenshot, it’s not a Slack message.”
“Clarity > Fluency.”
“Strong opinions, lightly held. Weak opinions, heavily padded.”
Side-by-side:
AI paragraph → Edited human version (cut by 60%)
You’re normalizing editing down, not calling out AI.
Typical next step:
Post a before/after edit of your own message and say: “Cut this from 300 → 90 words. Feels better.”
3. Cite Credible Writing Culture References
Frame it as aligning with high-signal orgs:
High Output Management (Andy Grove) – Emphasizes crisp, decision-oriented managerial communication.
The Pyramid Principle (Barbara Minto) – Lead with the answer, then the supporting detail.
Amazon – Narrative memos, but tightly structured and decision-oriented.
Stripe – Known for clear internal writing culture.
Shopify – Publicly discussed AI use, but with expectations of accountability and ownership.
You’re not arguing against AI; you’re arguing for ownership and clarity.
Typical next step:
Share one short excerpt on “lead with the answer” and say: “Can we adopt this?”
4. Shift the Evaluation Criteria in Meetings
When someone posts AI-washed text, respond with:
“What’s your recommendation?”
“If you had to bet your reputation, which option?”
“What decision are we making?”
This conditions brevity and personal ownership.
Typical next step:
Start consistently asking “What do you recommend?” in threads.
5. Propose an “AI Transparency Norm” (Soft)
Not mandatory—just a norm:
“If you used AI, cool. But please edit for voice and add your take.”
This reframes AI as a drafting tool, not an authority.
Typical next step:
Add a line in your team doc: “AI is fine for drafting; final output should reflect your judgment.”
6. Run a Micro-Experiment
Offer:
“For one sprint, can we try 5-bullet max updates?”
If productivity improves, the behavior self-reinforces.
Strategic Reality
If the CEO models AI-washing, direct confrontation won’t work. Culture shifts via:
Incentives (brevity rewarded)
Norms (recommendations expected)
Modeling (you demonstrate signal-dense writing)
You don’t fight AI. You make verbosity socially expensive.
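If helpful, I can draft:
A 1-page clarity guideline
A Slack post to introduce it
A short internal “writing quality” rubric
A meme template you can reuse
Which lever feels safest in your org right now?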
Is it? I am using AI daily, but would rank it dead last compared to food, water, shelter, heating, transportation, education, healthcare, and communications.
Countries in which the income disparities ARE so high are also the ones where the "poor" are the richest. They just feel poor in comparison, not in absolute terms.
$70K a year is poor in California, but top-1% rich in almost any country in the world.
Countries with low income disparity include Albania, Afghanistan, and Armenia, to name the first three with a Gini income index below 30.
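For reference, the "below 30" figure refers to the Gini index, which scores income inequality from 0 (perfect equality) to 100 (all income held by one person). One standard definition, in terms of the Lorenz curve $L(p)$ (the cumulative share of income earned by the poorest fraction $p$ of the population), is:

$$G = 100\left(1 - 2\int_0^1 L(p)\,dp\right)$$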
This is an anomaly, left over from the period when the middle class was growing after the Second World War. We (Western countries) are dismantling all the backstops, and the process will reverse, moving the wealth to the few rich people in the capital class. When this process is complete, the poverty levels in the West will equal those of the countries you mentioned, Afghanistan etc.
The USA and UK are leading the process, since they started to pursue this goal aggressively during the 80s with Reaganism and Thatcherism.
And you’re claiming the process still isn’t complete more than 40 years later? Shouldn’t the wealth gap between the poor in the US and the poor in Afghanistan be starting to get smaller if your argument is correct?
And this is exactly why nominal $ comparisons are completely pointless. Someone who makes $70k in southern or eastern Europe is living like a king (or at least living a good life anywhere in Europe), while someone making $70k in expensive parts of California is going to struggle.
Wealth is your share of the overall resources; $ amounts are just an abstraction.
As an old coworker once said when talking about a certain manager: “That boy’s just smart enough to be dumb as shit.” (The AI, not you; I don’t know you well enough to call you dumb.)
What didn’t you understand? The point isn’t whether I do specific thing A or specific thing B; the point is that, when I can, I do the best I can in the situation to improve the average. The specifics don’t matter; what matters is the overall impact. OP is playing the “debate” game, which is about winning, not about the issue itself. OP doesn’t care to understand; they just want to score points, hence their desire to focus on specific instances.
Had OP said something like "How can you make an informed decision congruent with your ethics when so many ubiquitous companies violate human rights?" that would have been a genuine question. Instead OP said "Tell me why you don't do X" and behind that is "because I win." That's arguing from bad faith (a polite way to describe OP).
You said AMA, and he asked a very simple question. You cannot answer that very simple question. He wins because he is almost surely correct in his assumptions about you, no matter how much you weasel around it.
I'm sorry you don't understand my answers. Like, at all. Maybe calm down and re-read my responses when you have a clear head? It's all spelled out multiple times.