But wait, there’s more.
One of the pure pleasures of thinking hard about something is the number of spinoff thoughts that result and how much more you have to think about by the end of it!
Thinking and then writing about AI notetakers turns out to be one of those things. So an update.
Usage patterns that work
Since writing the initial post about AI notetakers, I’ve had two fascinating conversations with people who love using their notetakers. The interesting thing is that the way they use them sidesteps many of the challenges of permission, inaccuracy and hallucination that have been worrying me.
(1) Asking permission
I was visiting my GP (General Practitioner aka Primary Care Physician / Family Doctor) on a recent Wednesday. After we sat down and exchanged pleasantries she asked me if it was OK if she recorded our conversation with her AI notetaker.
Fantastic. She asked! If she can, we all can.
Given my professional interest in this space, I asked which software it was - PatientNotes.app - whether they stored their data onshore in Australia and what the deletion policies were.
My (awesome) GP paused, considered, and said that she didn't know for sure as she had been using it for a while now, but that it was an Australian company and she believed the storage was in Sydney. She said she would definitely check and get back to me. I said I was happy for PatientNotes to listen in and thought no more about it.
Fast forward to the following Friday (just two days!) and my GP sent me the following.
Dear Kendra
I just wanted to let you know that I checked and can confirm that the data from
PatientNotes is stored domestically in Sydney.
You will no doubt understand this better than I do, but they say that records are deleted 30 days after they are made, and I always delete mine at the end of the week anyway.
The webpage is https://www.patientnotes.app/privacy-and-compliance if you would like to check further.
Now quite frankly in awe of this busy GP, I did of course check out the PatientNotes.app website and I'm impressed. They have a page on Security and a page on Privacy & Compliance.
Easy to find, easy to read. Kudos to Darren Ross, Sarah Moran and the team at PatientNotes.app
(2) Reviewing immediately
After posting on LinkedIn about my GP interaction, I fell into conversation with a New Zealand-based lawyer who is a confident and frequent user of AI tools in his practice. Like my GP, he uses an AI notetaker during client consultations in place of the handwritten notes he used to take.
Crucially, and this was true of my GP as well, he reviews the transcript and summary IMMEDIATELY after the consultation and makes edits. Then he saves the edited version as the record, effectively building on a note-taking practice he has had in place for many years.
Contrast this with the anecdotal behaviour of many recent AI notetaker users: saving the transcript / summary UNREAD as a record of the conversation to refer to later, or reading the transcript / summary of a meeting they DID NOT ATTEND to catch up on the content.
A telling example of the AI + human system working better for some workflows than for others.
And then Granola!
Just as I was pushing publish on the original post, I was in a conversation where people were discussing AI notetakers and their occasional experience of joining a call where there were no other humans, just AI notetakers. Then one of the women mentioned that ‘at least you could see the notetakers’. She had been told there were now AI notetakers that didn’t appear in the call at all. And voila, I discovered Granola.
Um, yeah, so as my new lawyer friend had commented - it might be safest to now assume that you are being recorded at all times.
Post image credit to Lucas Alexander on Unsplash