Questions

What does a "legible representation" of an AI system actually look like?

When can a human look at a system's state and understand what it will do next? This question cuts across visualization, inspection tooling, and the more fundamental issue of how much complexity can be made transparent at all.
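
To make the question concrete, here is a minimal, purely illustrative sketch of the two extremes. The `ThermostatState` example and all names in it are hypothetical, not drawn from any real system: in the legible case, the next action is a short, readable function of explicitly named fields; in the opaque case, the state is a bare vector whose behavioral meaning cannot be read off by inspection.

```python
# Hypothetical sketch: a "legible" state vs. an opaque one.
from dataclasses import dataclass


@dataclass
class ThermostatState:
    """Legible: every field has a meaning a human can read off directly."""
    current_temp_c: float
    target_temp_c: float
    heater_on: bool


def next_action(state: ThermostatState) -> str:
    """The transition rule fits in a few lines, so a human inspecting the
    state can predict the next action with confidence."""
    if state.current_temp_c < state.target_temp_c - 0.5:
        return "turn_heater_on"
    if state.current_temp_c > state.target_temp_c + 0.5:
        return "turn_heater_off"
    return "hold"


# Contrast: an illegible state is just numbers whose behavioral meaning is
# not recoverable by inspection, e.g. a neural network's hidden activations.
opaque_state = [0.12, -3.4, 1.7, 0.003]  # what will this system do next?

if __name__ == "__main__":
    s = ThermostatState(current_temp_c=18.0, target_temp_c=21.0, heater_on=False)
    print(next_action(s))  # -> "turn_heater_on": predictable from the state
```

The open question is how far up the complexity scale the first style of representation can be pushed before it collapses into the second.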