Creating user interfaces, especially implementing them, is not high on many people's list of favorite things to do. Even for designers who enjoy engineering an experience, working with programmers to implement their ideas can be frustrating.
I've worked on small UIs where I do everything myself, and I've worked on high profile projects with dozens of people involved. During that time I've used several UI toolkits on various platforms. While each had their differences, the issues that came up were usually the same.
- Even when you're a one-man team, converting a design into a functional implementation can take significant time and effort. This only gets worse as the complexity of your UI increases.
- Even a simple UI can very quickly become a tangled mess internally. How one widget looks depends on another, which depends on user input, and so on. Inevitably, someone manages to get the UI into a funky state. Reliably reproducing it can be difficult, and you end up with bug reports resembling "tap this area really fast while rotating the device back and forth." That's an actual bug report I once had to deal with.
- I've spent my fair share of hours going through memory dumps of an application trying to find what went wrong and where. But even with a perfect system, bugs will still happen. The goal is to make problems obvious and easy to fix.
- When a toolkit has perfect information about how a UI works with data, a great deal can be optimised in the toolkit. This lets your UI use far fewer system resources than a system where code might change any part of the UI at any time.
- Left unchecked, a UI's codebase can quickly become a nightmare to maintain. Over the lifetime of a UI's development, requirements can change frequently and drastically, and you're usually on a tight schedule, making it expensive to refactor code.
There are other issues I could get into, but these tended to be the most painful. Something I noticed was that the root of all these issues, directly or indirectly, was that code is not well suited for building UIs. So how do we remove code from the equation? Or at least reduce it to a minimum?
Any application that does anything has state; the current song in a music app, a character's inventory in a video game, etc. User interfaces also have state; a button is depressed, a menu is open, and so on.
A stateless UI refers to when there is no implicit state. Every state your UI can be in is explicitly defined as input to the system. No matter when or how many times you give the same set of inputs, you will always see the same result. What a pure stateless UI toolkit looks like, or if it would even be desirable, is debatable. What we can do is apply stateless design to the extent that it's convenient.
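To make "same inputs, same result" concrete, here is a minimal sketch in Python. The function name and data shapes are hypothetical, invented for illustration; they are not any real toolkit's API.

```python
# Minimal sketch of stateless rendering: every state the UI can be in
# is explicit input, so rendering is a pure function of that input.
# All names here are hypothetical, not part of any real toolkit.

def render(state):
    """Pure function: explicit UI state in, widget description out."""
    widgets = [("label", state["title"])]
    if state["menu_open"]:
        widgets.append(("menu", state["menu_items"]))
    return widgets

# No matter when or how many times the same inputs are given,
# the result is identical:
state = {"title": "Now Playing", "menu_open": False, "menu_items": []}
assert render(state) == render(state)
```

Because nothing is hidden inside the renderer, reproducing a reported bug is just a matter of replaying the same input state.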
In the model illustrated above, the application code only knows about data. The runtime is the codebase that uses the UI Definition to create and draw widgets based on what data is provided. When we restrict code to only modifying data fed into the system, code can't set any implicit state that might affect how the system handles data. This drastically reduces the surface area where things can go wrong.
We won't be going into all of them, but removing the option for code to directly interact with the runtime has many benefits for performance, stability, forward compatibility, and more.
The most important thing we get from a data-only interaction is that the UI Definition doesn't use code. How a UI looks and uses data has zero ties to code. This is imperative for creating good tools that are friendly to non-programmers.
Before we jump into tooling, let's look at a quick example of how this data-only model might work for a music player app:
In this example, the code is only interested in the data it needs to perform its function. It has no information about how wide the seek bar is, or which songs are being displayed to the user. Meanwhile, the User Interface Definition controls what data to display and how to display it. This is how significant changes can be made to a UI design without requiring a programmer to modify code. The design of the UI is kept separate from the inner workings of the application.
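As a rough illustration of the music player case, the application code might only ever read and write named data values. The class and key names below are hypothetical, sketched for this article rather than taken from uiink.

```python
# Hypothetical data-only interaction for a music player. Code sets named
# values; the UI Definition alone decides what to display and how.
class UIData:
    def __init__(self):
        self._values = {}

    def set(self, key, value):
        # In a real runtime, setting a value would trigger re-rendering
        # from data alone; here we just store it.
        self._values[key] = value

    def get(self, key):
        return self._values.get(key)

ui = UIData()
ui.set("song.title", "Clair de Lune")
ui.set("song.progress", 0.42)  # playback position as a fraction
# The code never touches widgets: no seek-bar width, no visible song list.
```

The point of the sketch is the boundary: everything crossing it is plain data, so the runtime can see every way the UI depends on it.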
What a stateless UI toolkit looks like in practice
A UI Definition written as code, where code creates widgets and manages user interaction, can very quickly become a complete mess. Traditional code-based systems also make optimisation very difficult for a runtime: the runtime has to make assumptions about how you implement your UI, which puts the onus on creators to avoid breaking those assumptions.
When we set out to make uiink, we wanted to create a tool that was great at managing and containing a UI's complexity. To do this we needed very clean boundary lines. The runtime needed to know about every possible way a particular UI might react to data and user input. By adopting a strict data-only API, uiink is able to heavily optimise your UI and help you avoid pitfalls while creating it.
One of the challenges with this strict data-only approach is when the UI needs to have complex, custom behavior. Quill, uiink's authoring tool, handles this by providing a large collection of nodes. A node takes data and processes it for other parts of the UI to react to. Given the same inputs, the node will always give the same outputs.
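A node of this kind can be modeled as a pure function. The node below is a hypothetical example invented for this article; it is not drawn from Quill's actual node set.

```python
# A node is a pure transformation: data in, data out, no hidden state.
def format_time(seconds):
    """Hypothetical node: turns a raw duration into display text."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"

# Given the same inputs, a node always gives the same outputs, which
# lets a runtime cache results and reorder evaluation freely.
assert format_time(125) == "2:05"
assert format_time(125) == format_time(125)
```

Determinism is what makes the graph analyzable: since no node can surprise the runtime, the whole graph can be treated as data.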
The graph of nodes you end up creating can be optimised both before and during runtime. Because uiink knows exactly what will happen in each node, it doesn't have to worry about code arbitrarily changing things while it's processing logic. How we take advantage of this from a technical standpoint is a bit outside the scope of this article.
The second piece of the puzzle is what we call an "impulse": state the UI can generate and send back to code. This is used when, for example, a button is clicked and your application needs to do something outside the scope of the UI, like make a network request or fetch data from a database.
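An impulse can be thought of as a named event the UI emits and code subscribes to. The dispatcher below is a hypothetical sketch of that idea, not uiink's real interface.

```python
# Hypothetical impulse dispatch: the UI emits a named impulse with a
# payload; code registers handlers but never reaches into the UI itself.
class Impulses:
    def __init__(self):
        self._handlers = {}

    def on(self, name, handler):
        """Code subscribes to an impulse by name."""
        self._handlers.setdefault(name, []).append(handler)

    def emit(self, name, payload=None):
        """The UI runtime fires an impulse toward application code."""
        for handler in self._handlers.get(name, []):
            handler(payload)

impulses = Impulses()
log = []
impulses.on("play_song", lambda song_id: log.append(song_id))
impulses.emit("play_song", "track-7")  # e.g. kicks off a network request
```

Note the direction of the arrow: data flows into the UI, impulses flow out of it, and neither side ever holds a reference to the other's internals.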
Copyright © 2017 Marshall Two LLC