A “No Reports” Application Design Framework

Database applications can be built in a way that lets users answer many of the day-to-day questions that come up about their data. There are frameworks that allow users to build their own custom reports, but we are thinking of a design where the user’s questions can be answered in the same forms they use for day-to-day data entry.

 

Case Study II: That Old Devil, Time

If we add a “number of minutes” field to our Cases table, we can calculate the average number of minutes dynamically.

[Image: average_number_of_minutes_calculated]
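As a rough sketch, the query behind that calculation might look like the following generic SQL; the table and column names (Cases, CategoryID, MinutesToResolve, ClosedDate) are illustrative assumptions, not the sample application’s actual schema.

```sql
-- Average resolution time per category, recomputed from the closed cases
-- every time the query runs (all names here are illustrative).
SELECT CategoryID,
       AVG(MinutesToResolve) AS AvgMinutes
FROM Cases
WHERE ClosedDate IS NOT NULL
GROUP BY CategoryID;
```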

As our employees get faster or slower at resolving certain categories of cases, our query will update the averages.

Perhaps we realize that some employees are faster than others with certain categories of cases. We can factor that into our query.

[Image: average_number_of_minutes_by_employee]
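A sketch of that refinement, again with illustrative names: grouping by both employee and category gives each employee their own average for each kind of case.

```sql
-- Average minutes per employee per category, so an employee who is quick
-- with one category and slow with another is scored separately for each.
SELECT AssignedTo,
       CategoryID,
       AVG(MinutesToResolve) AS AvgMinutes
FROM Cases
WHERE ClosedDate IS NOT NULL
GROUP BY AssignedTo, CategoryID;
```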

Now we can display all the information needed to decide which user to assign a new case to.

[Image: assigned_to_extended]
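One way to gather those three figures for the drop-down is sketched below. The :NewCaseCategory placeholder stands in for the category of the case being assigned, and the Employees/Cases names are assumptions rather than the sample application’s actual schema.

```sql
-- For the category of the new case, show each employee's historical average,
-- their current open-case count, and the minutes of work already expected
-- from their open cases (estimated from the per-category averages).
SELECT e.EmployeeID,
       e.EmployeeName,
       (SELECT AVG(c.MinutesToResolve)
          FROM Cases c
         WHERE c.AssignedTo = e.EmployeeID
           AND c.CategoryID = :NewCaseCategory
           AND c.ClosedDate IS NOT NULL)            AS AvgMinutesForCategory,
       (SELECT COUNT(*)
          FROM Cases c
         WHERE c.AssignedTo = e.EmployeeID
           AND c.ClosedDate IS NULL)                AS ActiveCases,
       (SELECT SUM(avg_cat.AvgMinutes)
          FROM Cases open_c
          JOIN (SELECT CategoryID,
                       AVG(MinutesToResolve) AS AvgMinutes
                  FROM Cases
                 WHERE ClosedDate IS NOT NULL
                 GROUP BY CategoryID) AS avg_cat
            ON avg_cat.CategoryID = open_c.CategoryID
         WHERE open_c.AssignedTo = e.EmployeeID
           AND open_c.ClosedDate IS NULL)           AS ExpectedMinutesOfWork
FROM Employees e;
```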

For each employee we show the average minutes they take to complete this category of task, the number of active cases they have, and the total minutes of work expected for those cases.

We can imagine that for different customers the same employee might have different average times for some categories. We could assign a “difficulty factor” to the customer and then factor that into our calculations. That is left as an exercise for the reader.

A Case Study: Starting with a Microsoft Sample Access Application

This is a “Customer Service” application. Here is the Case detail form.

[Image: assigned_to]

This sample application takes a simple approach, and thus it shows clearly the difference between an application that records what has happened and one that tries to make use of data to anticipate what might happen.

Each case records who it was assigned to, when it was opened, and when it was closed. But when the user opens the drop-down to assign an employee to a case, is there some help we could provide if the application knew more? We can query how many open cases each employee has and add that information to the list of employees in the drop-down.

[Image: employee_with_number_of_cases]
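A minimal sketch of that drop-down’s row source in generic SQL; the Employees and Cases names, and the ClosedDate IS NULL test for “open”, are assumptions about the sample schema.

```sql
-- Employee list for the "Assigned To" drop-down, annotated with each
-- employee's count of currently open cases.
SELECT e.EmployeeID,
       e.EmployeeName,
       COUNT(c.CaseID) AS OpenCases
FROM Employees e
LEFT JOIN Cases c
       ON c.AssignedTo = e.EmployeeID
      AND c.ClosedDate IS NULL
GROUP BY e.EmployeeID, e.EmployeeName
ORDER BY OpenCases;
```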

Now new cases can be given to employees who have fewer cases already assigned to them.

It could well be that not all cases are equal: some might take 5 minutes to complete and some 5 hours. Perhaps it would help if the person doing assignments had some idea of the workload assigned, instead of just the number of cases. Looking at the other data we are collecting, we see that cases are assigned to different “Categories”. We will add a field to the Category table that holds the average number of minutes needed to deal with a case of that category.

To do this we have to add a table to our application. The original category drop-down just used a list of category names. Now, because “category” needs to carry its own data, it has to become an object represented by a table.

[Image: category_table]
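A minimal sketch of what that table might look like, with an illustrative column for the estimate; in the real application the Cases table would then store a CategoryID foreign key instead of the category name.

```sql
-- Category becomes its own table so it can carry data of its own,
-- here an estimate of how long a case of this category normally takes.
CREATE TABLE Category (
    CategoryID       INTEGER PRIMARY KEY,
    CategoryName     TEXT    NOT NULL,
    EstimatedMinutes INTEGER
);
```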

Now we can add the number of estimated minutes to our workloads.

[Image: average_number_of_minutes]
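Sketched as a query, the workload figure is just the open cases per employee joined to the Category estimates (names are illustrative):

```sql
-- Workload per employee: how many cases are open and how many estimated
-- minutes of work those cases represent.
SELECT c.AssignedTo,
       COUNT(*)                  AS OpenCases,
       SUM(cat.EstimatedMinutes) AS EstimatedMinutesOfWork
FROM Cases c
JOIN Category cat ON cat.CategoryID = c.CategoryID
WHERE c.ClosedDate IS NULL
GROUP BY c.AssignedTo;
```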

We can see that employee 1 has 6 times the workload of employee 2, even though they have the same number of cases.

There is more information we can add to our form, so that the user, while doing their data entry, can see the general context within which they are working.

[Image: form_with_extra]

We have provided a view of the average time to complete a case of the assigned category, and also a view of the average response time for this customer. Hopefully, after entering multiple cases, this view will lead to questions like “why are we so often overdue in completing this category of case?” or “why is our average for closing cases on time so much better for some customers than it is for others?”

What we have done is take some elements that would normally be visible only in reports, like “what is the average time for completing this category of case?”, and make them visible to the user while they do their daily work. This does not substitute for reports, but, hopefully, it will spark the user’s curiosity.

 

Batch Data Entry: Illustrated

Read this post for more about “batch” design.

Here is the basic outline of a form designed for batch data entry. In this case the records have been filtered to create a batch, but no individual record has been selected yet.

[Image: batch_before_selection]

The result of this layout is that the user sees the statistics for the batch because they have to filter the batch before they can click on an individual record. They may not pay much conscious attention, but seeing the statistics will gradually have an effect.

When they click on a record in the current batch the layout changes to display the record they have selected.

[Image: batch_after_selection]

Now they see the individual record, but we still show them statistics.

These layouts take advantage of the fact that most people use much bigger monitors these days, and thus have room to see more context for the data they are working on. They also borrow from layouts designed for viewing on phones, where a responsive design would stack the same blocks of information.

Batch Data Entry: the first step towards data literacy

All triggers are bulk triggers by default, and can process multiple records at a time. You should always plan on processing more than one record at a time. –Salesforce

Database applications are typically split into data entry and reporting. For data entry they offer different ways of finding the record you wish to create or change. For reporting they offer different ways of filtering and summarizing your records.

Typically analytics is applied to reporting, but users spend much more time entering and updating data than they do looking at reports. The idea that all users should be able to do analysis of the data means that we have to convince users who do the bulk of their work in data entry screens to also go look at reports or dashboards. This is probably going to be viewed as “extra” work on their part and it will be hard to convince them.

We can borrow a metaphor from Salesforce and make all data entry deal with “batches” instead of “records”. We place on the same form a way to select a batch of records, and then a way to select one of those records to view or edit its details. For every “batch” of records we display summary statistics on the same form. The user must select a batch in order to select a record, and every time they select a batch they also see the statistics associated with that batch.
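As a sketch, a batch form of this kind typically runs two queries against the same filter: one for the records in the batch and one for the statistics shown beside them. The Cases/Category names and the :SelectedCustomer parameter are illustrative, not any specific product’s schema.

```sql
-- The batch the user has filtered: open cases for one customer.
SELECT CaseID, Title, AssignedTo, OpenedDate
FROM Cases
WHERE CustomerID = :SelectedCustomer
  AND ClosedDate IS NULL;

-- The summary statistics displayed on the same form, for the same filter.
SELECT COUNT(*)                  AS CasesInBatch,
       MIN(c.OpenedDate)         AS OldestOpenCase,
       SUM(cat.EstimatedMinutes) AS EstimatedMinutesOfWork
FROM Cases c
JOIN Category cat ON cat.CategoryID = c.CategoryID
WHERE c.CustomerID = :SelectedCustomer
  AND c.ClosedDate IS NULL;
```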

Email offers an everyday example of dealing with things in “batches”. Your inbox is a batch of emails. If you search for an email address you see a batch of emails that match your search. When you click on a record in a batch you then see the full record. Email clients sometimes use a different screen to display the full record… but our model is going to try to display both the batch and the selected record on the same form.

One of the big problems with trying to improve your organization’s data culture is that thinking about data in terms of its context is much harder than thinking about it in terms of the content of a particular record. A “batch”-based interface lets users come to grips with their data’s context by constantly exposing them to it as they do their daily work. You could compare it to the list of ingredients on the side of a cereal box as opposed to the advertising copy on the front of the box. Using a “batch” interface gradually improves the users’ ability to “think” about their data. You could think of it like a sports team that spends time before every practice doing exercises. The exercises strengthen the team members and make them more able to learn the skills demanded by their sport.

One immediate consequence (in the applications I have built in this way) is that when the users are filtering a batch of data they often immediately see some records which should not be in that batch. For example, they are filtering for active members, but they see a name in the batch they know is not an active member. Because they can click on the offending record and “fix” it, seeing their data in terms of “batches” means that the batches will become “better” as the records in them become more correct.

A “batch”-style interface can be constructed in any programming environment. I have posted some wire-frame drawings in this post that may help.

 

 

The Gallup Q12

The Gallup Q12 Index: Gallup’s employee engagement work is based on more than 30 years of in-depth behavioral economic research involving more than 17 million employees. Through rigorous research, Gallup has identified 12 core elements — the Q12 — that link powerfully to key business outcomes. These 12 statements emerged as those that best predict employee and workgroup performance. The twelve questions are:

1. Do you know what is expected of you at work?
2. Do you have the materials and equipment to do your work right?
3. At work, do you have the opportunity to do what you do best every day?
4. In the last seven days, have you received recognition or praise for doing good work?
5. Does your supervisor, or someone at work, seem to care about you as a person?
6. Is there someone at work who encourages your development?
7. At work, do your opinions seem to count?
8. Does the mission/purpose of your company make you feel your job is important?
9. Are your associates (fellow employees) committed to doing quality work?
10. Do you have a best friend at work?
11. In the last six months, has someone at work talked to you about your progress?
12. In the last year, have you had opportunities to learn and grow?

Engaged Employees

Study after study shows that engaged employees are more loyal, they’re more productive, and they yield increased customer satisfaction, higher financial returns, and greater shareholder value.

from a 2016 Dreamforce session “How Salesforce Uses Culture + Tech to engage employees” https://www.youtube.com/watch?v=JTHhwjJXn54

The “State of the Global Workplace” study found 13% of employees engaged (“innovating”), 24% actively disengaged (“sabotaging”), and 63% disengaged (“sleepwalking”).

Data Tai Chi

I used to know a guy who would say “Tai Chi is not a martial art. Tai Chi is the exercise a gentleman does every morning so that he will be able to practice martial arts.”

The same is true of enhancing your organization’s data culture. We call this “Human Intelligence”: a collection of practices that you follow in your organization to prepare it for the demands of living in a data culture. No matter which of the many tools or systems of analytics you try to use, you will need people who work well in certain kinds of groups, people who are happy in an environment that demands constant learning, and people who can navigate situations that are full of unknowns and risk.

Every situation is both a challenge to practice and an invitation to learn, and, in the end, being able to learn is a harder thing than being able to practice. I always remember my father, who was brilliant at his work, saying “I learn more now in a day than I used to learn in years.”

To be able to do effective work and effective learning individuals are dependent on the organizational culture that supports (or hinders) them. Organizations are dependent on the individuals who work within them to build and maintain the data cultures they need to thrive and survive.

These two principles, mutual dependence and prioritizing learning, are the core of a program of Human Intelligence. The mutual dependence between individuals and their groups, and the immense advantage gained when they can leverage every situation, whether the outcome is positive or negative, to drive themselves forward into new insights and new skills, are things that must be worked at and exercised by every organization that wants to succeed.

The Data Culture Uplift Workbook: 7 Steps towards Change

If you think your organization needs to improve its data culture, what can you do? What is a practical course to take to achieve that improvement?

Step 1: Set a baseline. Have an organization-wide discussion about what you are going to attempt, and then have everyone take the Data Fluency Inventory (a survey available on this site). The result will be a score that indexes your organization’s current data culture. This will give you a baseline for discussing what changes you should attempt and whether or not the changes you try are successful.

Step 2: Get help. A lot of the problems around data culture come from the tunnel vision that all of us have about our daily activities. Find a helpful outsider: a teacher, a tutor, or a consultant who can be part of your discussions and who will be able to look at your organization with fresh eyes.

Step 3: Try to decide what kind of organization you are in terms of your data culture.

Are your people already data aware, and are your decisions already guided and evaluated using data? If so, you are in “recon pull”: you can give mission-style orders (where you tell people what their goal is but not how to get there) and expect group leaders to be flexible and creative as they respond to conditions on the ground.

Does your organization mostly collect historical data? Do you use reports to illuminate the past but not to guide future decisions? Do your members require education and encouragement to come to grips with the complexity of their data? This is an organization in “command push” mode. You will need to give people detailed plans to follow while you put programs in place to up their game. You will need to look at your hiring process and at what kind of training and opportunities for independent work you offer. There is nothing wrong with finding yourself in this mode. It just means you have a lot of training and building up to do. It’s a good thing to know that, right now, you should focus on walking rather than trying to run.

Once you have made a self-assessment you are in a position to make plans for change. The plans you make and the things you try should match the current state of your data culture.

Step 4: Start working on a data flow map. In order to come to grips with your data culture you will need to know what data is flowing through your organization, who is changing it and where it can be found. There are tools you can use to make your map, or you can just put it in a spreadsheet or a word document.

The important things about a data flow map are true of all such maps, no matter how complex they are.

The map, and changes to the map, need to be part of your daily discussions about data.

The map will never cover all of your data flows, but the effort of maintaining it will boost your data culture no matter how complete, or incomplete, it is.

Step 5: Add some habitual routine to your daily practice that makes you look up from your tunnel focus on the current problem and take in the current state of your data. You can decide that every time you find yourself in a discussion about data you will play the “penny game”. Or you can play “kick the bucket”: put three little buckets on your conference table and take the time to discuss what data you think is in each bucket before you make any decisions.

Step 6: Make plans for change. Implement your changes. The plans you make and the way you try to implement them will be different depending on your own self-assessment as an organization. But while you are making your plans and while you are implementing them, take the chance to practice the penny game or kick the bucket. As you work through your plans, diagram the pieces of the data flows you are thinking about into your data flow map.

Step 7: After you have worked away at your changes for a while, go back to Step 1 and repeat.

Have another organization wide discussion and go back and have everybody take the Data Fluency Inventory again. Compare this score with your base line score. Discuss what you expected to happen and what actually happened. Talk about what you might try next.

Just like an exercise program, don’t expect your success to be spectacular, and do expect that you will often fall back and have to start the process again.