The spec has been established, the requirements are written, the feature has been implemented. It’s time to test the feature. The underlying assumption is that the requirements are clear (if they are even written down at all!). Requirements can also be implicit, like when a spouse expected you to do something yesterday without a word being said. Notice the different team members involved here:

  • Everyone: “Can you test against the requirements as I understand them and…”
  • Product Manager: “…as I understand the spec and scope?”
  • Developer: “…as I implemented it?”
  • Designer: “…as I designed it?”
  • Me: “Are we all really aligned here?”

Words mean different things to different people. Today I will be exploring the topic of interpreting the written word, more specifically the ‘Mary had a little lamb’ heuristic, which I discovered on ‘the four-hour tester’ website. Here is a snippet of that article, outlining how stressing each word can change the meaning of one sentence (the stressed word is shown in capitals):

Statement                 In contrast to
MARY had a little lamb    … it was hers, not someone else’s
Mary HAD a little lamb    … but she doesn’t have it anymore
Mary had A little lamb    … just one, not several
Mary had a LITTLE lamb    … it was very, very small
Mary had a little LAMB    … not a goat, a chicken, etc.
Mary HAD a little lamb    … but John still has his

Google Calendar Reminders Exercise

Apply “Mary had a little lamb” to the second sentence of this quote: “You can add reminders in Google Calendar. Reminders carry over to the next day until you mark them as done. For example, if you create a reminder to make a restaurant reservation, you’ll see the reminder each day until you mark it as done.”

Stress different words or groups of words and come up with at least 10 different interpretations.

The sentence to analyse is “Reminders carry over to the next day until you mark them as done.” I admit I’m not familiar with the functionality of Google Calendar reminders, so I will be playing pretend. You can view this as a realistic scenario, in which a new tester (moi) is expected to pick up this piece of work.

Reminders

  • What exactly is a reminder as defined by Google Calendar?
    • Are there different types of reminders?
    • Are reminders associated with different objects?
    • Are there third-party integrations with Google Calendar?
  • Reminders (plural) means all reminders, is that right? You can’t apply this feature only to a single reminder?
  • On the release of this feature, what will happen to the existing reminders that people have? Will it apply to them?

carry over

  • Does ‘carry over’ affect the original reminder date? Is it a literal ‘carry over’?

next day

  • Next day in what time zone? At what point does it carry over? (See the sketch after this list.)
  • Does ‘next day’ mean tomorrow? Can you create reminders every 2 days for example? If so, what happens if you change the frequency?
  • Does Google support other calendars like the Chinese lunar calendar? How is a day defined there?
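
To make the time zone question concrete, here is a minimal Python sketch. It is my own illustration, not a claim about how Google Calendar actually behaves, and the date and time zones are arbitrary; it only shows that one and the same instant can already be the ‘next day’ for one user while still being ‘today’ for another:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# A single instant in time...
instant = datetime(2024, 3, 31, 23, 30, tzinfo=timezone.utc)

# ...lands on different calendar days depending on the user's time zone,
# so "carries over to the next day" is ambiguous until a zone is chosen.
for tz_name in ("America/Los_Angeles", "UTC", "Pacific/Auckland"):
    local = instant.astimezone(ZoneInfo(tz_name))
    print(f"{tz_name:<20} {local.date()} {local.time()}")

# America/Los_Angeles  2024-03-31 16:30:00
# UTC                  2024-03-31 23:30:00
# Pacific/Auckland     2024-04-01 12:30:00
```

Whether the carry-over is anchored to the creator’s zone, the owner’s zone, the device or the server is exactly the kind of decision that interpretations 4 and 5 below try to pin down.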

until

  • Is ‘until’ a hard requirement? Is this customizable?
  • What about reminders that are never dismissed?

you

  • What about other calendar users?

mark them

  • What is the action of marking?
  • What about actions other than ‘mark’? When you say ‘mark them’, that assumes multi-select.

as done

  • After marking as done, can you undo this action?
  • After undo-ing the action, can you redo it?
  • Is there a ‘done’ date? Is it set to the current time, or is it not stored at all?

Interpretations

Did I get at least 10 interpretations? I’ve rephrased the requirements based on the questions I laid out above. I noticed that by analysing what has been said, my brain also brought out ideas about what has not been said. I drew upon my own experience and generated implicit requirements (e.g. points 6 and 7 below).

  1. All task reminders…
  2. All event reminders…
  3. Reminder dates are migrated to the next day… (as opposed to being shown as a reminder on the original day)
  4. Reminders are carried over at 0:00 the next day, in the calendar type and time zone of the reminder creator.
  5. Reminders are carried over at 0:00 the next day, in the calendar type and time zone of the reminder owner (not necessarily the person who created it).
  6. This feature is enabled by default and not customizable.
  7. This feature is disabled by default and is customizable.
  8. By ‘until’, we mean that reminders will carry over indefinitely.
  9. By ‘until’, we mean that reminders will carry over a maximum of 7 days (arbitrary duration) before being automatically dismissed.
  10. …until the reminder creator marks the task or event as done.
  11. …until anyone who has access to the calendar marks the task or event as done.
  12. Once the reminder has been actioned as ‘done’, this action cannot be reversed.
  13. Once the reminder has been actioned as ‘done’, this action can be reversed within 30 seconds (arbitrary duration).

Evaluation

Has your interpretation of the sentence become richer through this exercise?

Many of my interpretations relate to poor product knowledge. I am conscious of this and careful not to rush to label anything as a bug until I know more. Nevertheless, applying this heuristic stresses the importance of the beginner’s mind. By asking questions about the new feature, you also question the existing product itself. That only adds to the number of possible test ideas to explore and experiment with. This is not as easy to do when you think you know the product: overconfidence can limit critical thinking, which inhibits test ideas.

How different could two implementations be, if their developers did not share all of these interpretations?

It can be the difference between a two-minute job and a two-week job. Taking points 6 and 7 as an example, more testing will be required depending on how customizable the feature is. Now imagine you had 5 new features being developed in parallel with this one; that multiplies the time it would take to deliver the new updates! This goes to show that effective communication continues to be a crucial skill.
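
How different, concretely? Here is a minimal sketch of two carry-over implementations written against interpretations 8 and 9 above. Everything in it is hypothetical (the Reminder class, the function names, the 7-day cap); it is not how Google Calendar works, only an illustration of how two developers could both satisfy the original sentence and still ship different behaviour:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Reminder:
    created_on: date
    done: bool = False

def shows_on_v1(reminder: Reminder, today: date) -> bool:
    """Interpretation 8: carry over indefinitely until marked as done."""
    return not reminder.done and today >= reminder.created_on

MAX_CARRY_OVER_DAYS = 7  # arbitrary cap, as in interpretation 9

def shows_on_v2(reminder: Reminder, today: date) -> bool:
    """Interpretation 9: carry over for at most 7 days, then auto-dismiss."""
    if reminder.done:
        return False
    cutoff = reminder.created_on + timedelta(days=MAX_CARRY_OVER_DAYS)
    return reminder.created_on <= today <= cutoff

# Both versions "carry over to the next day", but they disagree on day 8:
r = Reminder(created_on=date(2024, 1, 1))
print(shows_on_v1(r, date(2024, 1, 2)), shows_on_v2(r, date(2024, 1, 2)))  # True True
print(shows_on_v1(r, date(2024, 1, 9)), shows_on_v2(r, date(2024, 1, 9)))  # True False
```

A tester who only checks the day after creation would never notice the difference, which is exactly why these interpretations need to be surfaced before (or while) the code is written.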

Based on your different interpretations, where do you think are good places to look for bugs?

Where do “I” think?

It’s not just what I think; it’s what ‘we’ think as a team. The place I would start is working with product managers and developers to review my interpretations. Just like communication, testing is a two-way street. We validate requirements, and in the process of testing we also learn new things about the product.

“Good places”? “Bugs”?

  1. Am I improving my product knowledge? What is a ‘bug’ to me? What is a ‘bug’ to the team? I answer this by experimenting with the product, reading documentation, and collaborating with the team. This aligns me with the team’s definition of a bug. By asking questions, I hope to also align the team’s understanding of how the product should work. The last thing I want to do is report a bug ticket that is not valid.
  2. Am I seeking the right kinds of bugs (my bug radar)? Do stakeholders care about problems other than the feature ‘working’? Performance? Design? Understanding this is important for me to steer my testing in the right direction. Time is a limited resource, but my default approach has always been to assess risk. The ultimate question is “What if a user is supposed to see a reminder but it doesn’t show up?” Anything that does not help answer that question, I deem low priority/risk.

“Based on your different interpretations”

Personally, I’m interested in the nouns of those statements (“Reminders” and “you”). What do they mean? Are there other nouns involved that we don’t know about (e.g. third-party integrations)? I explore the ‘thing’ before exploring what the thing does. That order makes logical sense to me.