A “No Reports” Application Design Framework

Database applications can be built so that users can answer many of the day-to-day questions that come up about their data. There are frameworks that allow users to build their own custom reports, but we are considering a design where the user’s questions can be answered in the same forms they use for day-to-day data entry.


Case Study II: That Old Devil, Time

If we add a “number of minutes” field to our Cases table, we can calculate the average number of minutes dynamically.
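As a minimal sketch of that dynamic calculation, assuming a hypothetical Cases table with CategoryID and Minutes columns (the sample application’s actual schema may differ):

```python
import sqlite3

# Assumed schema: Cases(CaseID, CategoryID, Minutes) -- names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Cases (CaseID INTEGER PRIMARY KEY, CategoryID INTEGER, Minutes INTEGER)"
)
conn.executemany(
    "INSERT INTO Cases (CategoryID, Minutes) VALUES (?, ?)",
    [(1, 30), (1, 50), (2, 10)],
)

# Average resolution minutes per category, recomputed from the data every time.
rows = conn.execute(
    "SELECT CategoryID, AVG(Minutes) FROM Cases GROUP BY CategoryID ORDER BY CategoryID"
).fetchall()
print(rows)
```

Because the average is a query rather than a stored number, it stays current as new cases are closed.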


As our employees get faster or slower at resolving certain categories of cases, our query will update the averages.

Perhaps we realize that some employees are faster than others with certain categories of cases. We can factor that into our query.
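Factoring in per-employee speed is just a matter of grouping by both employee and category. A sketch under the same assumed schema (an added EmployeeID column, names illustrative):

```python
import sqlite3

# Assumed schema: Cases(EmployeeID, CategoryID, Minutes) -- names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Cases (EmployeeID INTEGER, CategoryID INTEGER, Minutes INTEGER)"
)
conn.executemany(
    "INSERT INTO Cases VALUES (?, ?, ?)",
    [(1, 1, 20), (1, 1, 40), (2, 1, 60), (2, 2, 15)],
)

# Grouping by both employee and category captures each employee's speed
# with each kind of case, not just the overall category average.
rows = conn.execute(
    """SELECT EmployeeID, CategoryID, AVG(Minutes)
       FROM Cases
       GROUP BY EmployeeID, CategoryID
       ORDER BY EmployeeID, CategoryID"""
).fetchall()
```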


Now we can display all the information needed to decide which user to assign a new case to.


For each employee we show the average minutes they take to complete this category of task, the number of active cases they have, and the total minutes of work expected for those cases.
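A sketch of that assignment-helper query, again under assumed table and column names (and with the simplification that expected work is active-case count times the employee’s category average):

```python
import sqlite3

# Assumed schema: Cases(EmployeeID, CategoryID, Minutes, Closed).
# Minutes is recorded when a case is closed; Closed is 0 = active, 1 = closed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Cases (
    CaseID INTEGER PRIMARY KEY,
    EmployeeID INTEGER,
    CategoryID INTEGER,
    Minutes INTEGER,
    Closed INTEGER
);
INSERT INTO Cases (EmployeeID, CategoryID, Minutes, Closed) VALUES
    (1, 1, 20, 1), (1, 1, 40, 1),      -- employee 1's closed category-1 cases
    (2, 1, 90, 1),                     -- employee 2's closed category-1 case
    (1, 1, NULL, 0), (1, 1, NULL, 0),  -- employee 1: two active cases
    (2, 1, NULL, 0);                   -- employee 2: one active case
""")

# For a new case of category 1: each employee's average minutes for that
# category, their active case count, and the expected minutes of pending work.
rows = conn.execute("""
    SELECT a.EmployeeID,
           a.AvgMinutes,
           COALESCE(w.ActiveCases, 0)                AS ActiveCases,
           COALESCE(w.ActiveCases, 0) * a.AvgMinutes AS ExpectedMinutes
    FROM (SELECT EmployeeID, AVG(Minutes) AS AvgMinutes
          FROM Cases WHERE Closed = 1 AND CategoryID = 1
          GROUP BY EmployeeID) a
    LEFT JOIN (SELECT EmployeeID, COUNT(*) AS ActiveCases
               FROM Cases WHERE Closed = 0
               GROUP BY EmployeeID) w
      ON w.EmployeeID = a.EmployeeID
    ORDER BY a.EmployeeID
""").fetchall()
```

In a real application each active case would be weighted by its own category’s average, but the shape of the query is the same.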

We can imagine that for different customers the same employee might have different average times for some categories. We could assign a “difficulty factor” to the customer and then factor that into our calculations. This is left as an exercise for the reader.

A Case Study: Starting with a Microsoft Sample Access Application

This is a “Customer Service” application. Here is the Case detail form.


This sample application takes a simple approach, and thus it shows clearly the difference between an application that records what has happened and one that tries to make use of data to anticipate what might happen.

Each case records who it was assigned to, when it was opened, and when it was closed. But when the user opens the drop-down to assign an employee to a case, is there some help we could provide if the application knew more? We can query for the number of open cases each employee has and add that information to the list of employees in the drop-down.
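The row source for that drop-down can be a single query. A sketch with assumed Employees and Cases tables (column names are illustrative, not the Microsoft sample’s actual schema):

```python
import sqlite3

# Assumed schema: Employees(EmployeeID, Name), Cases(CaseID, EmployeeID, Closed).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employees (EmployeeID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Cases (CaseID INTEGER PRIMARY KEY, EmployeeID INTEGER, Closed INTEGER);
INSERT INTO Employees VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO Cases (EmployeeID, Closed) VALUES (1, 0), (1, 0), (1, 1), (2, 0);
""")

# Drop-down row source: each employee paired with their open-case count.
# The LEFT JOIN keeps employees with zero open cases in the list.
rows = conn.execute("""
    SELECT e.Name,
           COUNT(c.CaseID) AS OpenCases
    FROM Employees e
    LEFT JOIN Cases c ON c.EmployeeID = e.EmployeeID AND c.Closed = 0
    GROUP BY e.EmployeeID, e.Name
    ORDER BY e.EmployeeID
""").fetchall()
```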


Now new cases can be given to employees who have fewer cases already assigned to them.

It could well be that not all cases are equal: some might take 5 minutes to complete and some 5 hours. Perhaps it would help if the person doing assignments had some idea of the workload assigned, instead of just the number of cases. Looking at the other data we are collecting, we see that cases are assigned to different “Categories”. We will add a field to the Category table that records the average number of minutes needed to deal with a case of that category.

To do this we have to add a table to our application. The original category drop-down just used a list of category names. Now, because “category” needs to carry its own data, it needs to become an object represented by its own table.
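The promotion from value list to table might look like this sketch, where the table and the EstimatedMinutes column name are assumptions for illustration:

```python
import sqlite3

# "Category" promoted from a plain value list to its own table, so it can
# carry data of its own. EstimatedMinutes is an assumed, illustrative column.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Categories (
        CategoryID       INTEGER PRIMARY KEY,
        CategoryName     TEXT NOT NULL,
        EstimatedMinutes INTEGER NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO Categories (CategoryName, EstimatedMinutes) VALUES (?, ?)",
    [("Password reset", 5), ("Hardware failure", 300)],
)

# The drop-down still shows names, but each name now carries an estimate.
rows = conn.execute(
    "SELECT CategoryName, EstimatedMinutes FROM Categories ORDER BY CategoryID"
).fetchall()
```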


Now we can add the number of estimated minutes to our workloads.


We can see that employee 1 has 6 times the workload of employee 2, even though they have the same number of cases.
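A sketch of how equal case counts can hide very unequal workloads, using the same assumed schema (category estimates and case assignments are invented for illustration):

```python
import sqlite3

# Assumed schema: Categories(CategoryID, EstimatedMinutes),
#                 Cases(CaseID, EmployeeID, CategoryID).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Categories (CategoryID INTEGER PRIMARY KEY, EstimatedMinutes INTEGER);
CREATE TABLE Cases (CaseID INTEGER PRIMARY KEY, EmployeeID INTEGER, CategoryID INTEGER);
INSERT INTO Categories VALUES (1, 300), (2, 50);
-- Both employees have exactly two open cases, but of different categories.
INSERT INTO Cases (EmployeeID, CategoryID) VALUES
    (1, 1), (1, 1),   -- employee 1: two 300-minute cases
    (2, 2), (2, 2);   -- employee 2: two 50-minute cases
""")

# Same case count, very different workloads once estimates are summed.
rows = conn.execute("""
    SELECT c.EmployeeID, COUNT(*) AS Cases, SUM(cat.EstimatedMinutes) AS Minutes
    FROM Cases c JOIN Categories cat ON cat.CategoryID = c.CategoryID
    GROUP BY c.EmployeeID ORDER BY c.EmployeeID
""").fetchall()
```

With these invented numbers, both employees have two cases, but one carries 600 estimated minutes and the other 100: the six-to-one gap the text describes.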

There is more information we can add to our form, so that the user, while doing their data entry, can see the general context in which they are working.


We have provided a view of the average time to complete a case of the assigned category, and also a view of the average response time for this customer. Hopefully, after entering multiple cases, this view will lead to questions like “why are we so often overdue in completing this category of case?” or “why is our average for closing cases on time so much better for some customers than it is for others?”

What we have done is to take some elements that would normally be visible in reports, like “what is the average time for completing this category of case?”, and made them visible to the user while they do their daily work. This does not substitute for reports, but, hopefully, it will spark the user’s curiosity.


Batch Data Entry: Illustrated

Read this post for more about “batch” design.

Here is the basic outline of a form designed for batch data entry. In this case the records have been filtered to create a batch, but no individual record has been selected.


The result of this layout is that the user sees the statistics for the batch, because they have to filter the batch before they can click on an individual record. They may not pay much conscious attention, but seeing the statistics will gradually have an effect.

When they click on a record in the current batch the layout changes to display the record they have selected.


Now they see the individual record, but we still show them statistics.

These layouts take advantage of the fact that most people use much bigger monitors these days, and thus have room to see more context for the data they are working on. They also take advantage of layouts designed for viewing on phones, where responsive design would use the same blocks of information.

Batch Data Entry: the first step towards data literacy

All triggers are bulk triggers by default, and can process multiple records at a time. You should always plan on processing more than one record at a time. –Salesforce

Database applications are typically split into data entry and reporting. For data entry they offer different ways of finding the record you wish to create or change. For reporting they offer different ways of filtering and summarizing your records.

Typically, analytics is applied to reporting, but users spend much more time entering and updating data than they do looking at reports. The idea that all users should be able to analyze the data means we have to convince users who do the bulk of their work in data-entry screens to also go look at reports or dashboards. This will probably be viewed as “extra” work on their part, and it will be hard to convince them.

We can borrow a metaphor from Salesforce and make all data entry deal with “batches” instead of “records”. We place on the same form a way to select a batch of records, then a way to select one of those records to view or edit its details. For every “batch” of records we display summary statistics on the same form. The user must select a batch in order to select a record, and every time they select a batch they also see the statistics associated with that batch.
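The flow described above can be sketched in a few lines: the record picker only operates on a filtered batch, so the batch statistics are computed as a side effect of every selection. The table, function names, and statistics here are all illustrative assumptions:

```python
import sqlite3

# Assumed schema: Cases(CaseID, Status, Minutes) -- names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Cases (CaseID INTEGER PRIMARY KEY, Status TEXT, Minutes INTEGER);
INSERT INTO Cases (Status, Minutes) VALUES
    ('open', 30), ('open', 50), ('closed', 10);
""")

def select_batch(status):
    """Filter a batch and compute its summary statistics in one step.

    The user cannot reach an individual record without passing through
    here, so the batch statistics always come along for free.
    """
    records = conn.execute(
        "SELECT CaseID, Minutes FROM Cases WHERE Status = ? ORDER BY CaseID",
        (status,),
    ).fetchall()
    stats = {"count": len(records),
             "total_minutes": sum(m for _, m in records)}
    return records, stats

records, stats = select_batch("open")
# The user picks a record from the batch; the stats stay on screen.
selected = records[0]
```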

Email offers an everyday example of dealing with things in “batches”. Your inbox is a batch of emails. If you search for an email address you see a batch of emails that match your search. When you click on a record in a batch you then see the full record. Email clients sometimes use a different screen to display the full record… but our model is going to try to display both the batch and the selected record on the same form.

One of the big problems with trying to improve your organization’s data culture is that thinking about data in terms of its context is much harder than thinking about it in terms of the content of a particular record. A “batch”-based interface lets users come to grips with their data’s context by constantly exposing them to it as they do their daily work. You could compare it to the list of ingredients on the side of a cereal box as opposed to the advertising copy on the front of the box. Using a “batch” interface gradually improves the user’s ability to “think” about their data. You could also compare it to a sports team that spends time before every practice doing exercises: the exercises strengthen the team members and make them better able to learn the skills their sport demands.

One immediate consequence (in the applications I have built in this way) is that when the users are filtering a batch of data they often immediately see some records which should not be in that batch. For example, they are filtering for active members, but they see a name in the batch they know is not an active member. Because they can click on the offending record and “fix” it, seeing their data in terms of “batches” means that the batches will become “better” as the records in them become more correct.

A “batch”-style interface can be constructed in any programming environment. I have posted some wire-frame drawings in this post that may help.