Gateway
Exploratory Workshop for Navigating the Anchor
An orienting pivot node that shaped consistency in human-algorithm interaction
To understand "undemocratic" algorithms, I conducted a workshop with frontline users, taking the "unfair recommendations" that trigger trust crises as a starting point. Leveraging Google's personalisation tools, I analysed how personas are constructed from different data types. Through simulation experiments, I observed how #hashtags form during data synthesis and studied the growth patterns of data correlation, in order to deduce the often-overlooked relationship between digital personas and data biases.
As quantitative change becomes qualitative, variables harden into constants, which may in turn spawn new variables. Active bias is a cocoon state reinforced by an individual user's own behaviour; it also stimulates passive bias, a group stereotype imposed by algorithms. Referring to the "Bias-Variance Trade-off", the narrowed design direction is to deliver a low-bias experience through two approaches, more precise or more diverse, scoped to the co-interaction where bias forms and solidifies.
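The Bias-Variance Trade-off mentioned above can be made concrete with a small simulation. The sketch below is purely illustrative (the toy regression task, function names, and all parameters are my own assumptions, not part of the project): models that are too rigid show high bias and low variance, while overly flexible models show the reverse, which is the tension the "more precise or more diverse" framing navigates.

```python
# Minimal illustration of the Bias-Variance Trade-off on a toy
# 1-D regression task. All data and names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(2 * np.pi * x)   # ground-truth signal
x_test = np.linspace(0, 1, 50)             # fixed evaluation points

def bias_variance(degree, n_trials=200, n_samples=30, noise=0.3):
    """Estimate squared bias and variance of a polynomial fit
    by refitting on many freshly sampled noisy datasets."""
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        x = rng.uniform(0, 1, n_samples)
        y = true_f(x) + rng.normal(0, noise, n_samples)
        coeffs = np.polyfit(x, y, degree)      # fit on a fresh sample
        preds[t] = np.polyval(coeffs, x_test)  # predict at fixed points
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - true_f(x_test)) ** 2)
    variance = preds.var(axis=0).mean()
    return bias2, variance

for d in (1, 3, 9):
    b2, var = bias_variance(d)
    print(f"degree={d}  bias^2={b2:.3f}  variance={var:.3f}")
```

A straight line (degree 1) cannot capture the sine wave (high bias) but barely changes between samples (low variance); a degree-9 polynomial tracks the signal closely (low bias) but swings with every new sample (high variance). The design analogue: a "more precise" feed behaves like the rigid model, a "more diverse" feed like the flexible one.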
Design Spectrum → “Persona” Data Performance Management in 2 Variance Modes

Design Control Shift in an Intelligible Structure
Motivating easy, self-managed operations for countering bias and sustaining integrity
Individual users are the victims of biased data and may also be its initial source, but the learning cost of algorithms is heavy for the general public, and current "fight magic with magic" solutions (e.g. dummy data) offer only superficial relief. People feel that individual strength is too limited to counterbalance results already produced by the mass.
Therefore, AI should be deployed strategically for sustainable integrity: making data in the machine-learning process more manageable and turning individuals into contributors who help eliminate algorithmic biases. Through persona fine-tuning in data correlations to prevent groupthink, and heuristic exploration that breaks through personal subjectivity, integrity can develop organically.
In other words, the aim is to empower humans with greater initiative over their own data: to give everyone a voice in the AI era and a way to act against societal biases, benefiting not just themselves but all of us. This is the core purpose of this design.
Feature Scaling → Persona Fine-Tuning + Heuristic Exploration
Visual System Design → Modular "Gene" + Canvas Display

Takeaway
AI is smarter than humans, but it is humans who truly bear the risks and consequences. How can we ensure that the information AI presents to us is just, and free of deliberate filtering? From a designer's perspective, I believe the focus of AI design at the current phase should be to help people first adapt, then guide them toward greater independence as allies of AI, rather than being ruled by it.