
Ryan from Synqtech #2

Oh, hey, we're, you know, we're a Google shop or we're an Amazon shop or something like that.

So I think for most of our customers, they're perfectly happy with it being on the Microsoft stack. But at the business level there's a weird pressure, I guess: if you could not be locked into one of those platforms, so to speak, then that's a more attractive business, potentially a more valuable business, in their eyes.

Because they want to.

It causes us to explore a little bit, but I really don't want to do that development work to, like, make our platform fully agnostic of whatever SaaS provider. That's a ton of effort for zero additional sales in the next year.

Okay, yeah, that's sort of, I guess, how we see the vendor lock-in, but not so much from like a customer.

I think they're all extremely happy that it's a Microsoft stack, because they're all already using Microsoft, so.

Yeah, so I guess, Ryan, would you say this is, like, a top-three pain for you?

It would be three if it's in the top three.

Okay.

I mean, our main one is, like, audio quality for us, because we've got the unique radio input.

That's just so low quality.

So that's far and away number one.

Number two is probably more on the manufacturing side because there's physical equipment.

How do we deploy that?

How do we update that?

That's kind of ugly.

And then we're into the turn-taking, kind of like... because we're sort of a passive listener to the audio stream.

Half of the conversations aren't for us, so when is it our turn?

We don't know when to jump in, so we've got to rely on keywords and things like that to try and kick it into gear.
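The "rely on keywords to kick it into gear" approach can be sketched as a simple trigger check on each transcribed utterance. This is a minimal illustration; the phrase list and function name are invented for the example, not Synqtech's actual implementation:

```python
import re

# Hypothetical trigger phrases a passive listener might watch for.
TRIGGER_KEYWORDS = {"price check", "stock check", "hey assistant"}

def should_respond(transcript: str) -> bool:
    """Return True if the transcribed utterance contains a trigger phrase."""
    # Normalize: lowercase and strip punctuation so matching is forgiving.
    text = re.sub(r"[^a-z0-9 ]", "", transcript.lower())
    return any(kw in text for kw in TRIGGER_KEYWORDS)
```

In practice the hard part Ryan describes remains: most radio traffic contains none of these phrases, and ASR errors on low-quality audio can mangle the trigger words themselves.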

As far as, I guess, getting at your number one: for us it isn't so much about the metrics, but if we were to compute a metric on audio quality, are we getting a good result?

I think it's probably less the model's fault and more the source audio that's the problem.

But we don't really have a good way to measure that.

And that would be... I'm not quite sure how to do it, because we're not audio engineers ourselves, but there's an interesting challenge there: could the system just report that the audio is not up to snuff, to help tune the system and identify problems?

Because we are getting results back, but are they... I guess, how do we evaluate our own audio quality? Not to compare models or anything like that, but just: are these results likely good ones, for our own use?

Like, yeah, if we're getting 90% success, I mean, that would be amazing.

I think we're probably more in the 60-70% range for getting quality transcriptions back.
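One cheap, model-free way to get the "audio not up to snuff" signal Ryan asks about is a heuristic over the raw PCM samples, such as RMS level and clipping ratio. A sketch, with illustrative (untuned) thresholds:

```python
import math

def audio_quality_flags(samples, full_scale=32768, rms_floor=0.01, clip_ratio_max=0.01):
    """Return warnings for a chunk of 16-bit PCM samples.

    rms_floor and clip_ratio_max are illustrative thresholds,
    not calibrated values.
    """
    if not samples:
        return ["empty"]
    # Normalize samples to [-1, 1] before measuring.
    norm = [s / full_scale for s in samples]
    rms = math.sqrt(sum(x * x for x in norm) / len(norm))
    clipped = sum(1 for x in norm if abs(x) >= 0.999) / len(norm)
    flags = []
    if rms < rms_floor:
        flags.append("too quiet / near silence")
    if clipped > clip_ratio_max:
        flags.append("clipping")
    return flags
```

Logging these flags alongside each transcription result would let them separate "the source audio was bad" from "the model did badly" without needing audio-engineering expertise.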

Okay.

Yeah. So, okay.

So this is... are you happy with this ordering, would you say?

I haven't done that yet.

Yeah.

Okay, awesome.

I realize I don't know if I actually pressed record on this. I've recorded it on the phone, you can see the challenge, actually. Yeah, recording features... oh, did I join with the wrong thing?

Okay

right

Maybe I'd love it if...

If I click the more, the three dots in the bottom right, it's got a record option for me.

Oh, really? Okay, would you be able to press that?

I can, I guess. I don't know where it's gonna end up. Otherwise, I can just... oh, okay.

It says recording features can't be started until... I'm just gonna leave and come back quickly.

Sorry, Ryan.

Okay, no problem.

I'm just gonna leave.

Okay, sorry about that.

I think I got logged out of Zoom.

That's why.

Okay.

Recording in progress.

There we go.

That was the smoothest one I've done.

I feel like usually it's way more complex.

It'll be like, you've got to redownload Zoom and, like, you got to do all this.

Okay.

Amazing.

Thank you, Ryan. Thanks for your patience.

Okay.

So this one is, like, gains. So it's kind of more like what you would get, like what would be the benefit to you of any changes.

Some of this is obviously largely based around the problems. So I kind of want to do the same thing: just get your reaction to this, like, are these the top three things that you would want out of a tool, kind of thing?

So here you've got: a production-faithful eval system with clear metrics; conversations should feel instant and natural; teams want to swap automatic speech recognition models.

Yeah, I think so.

Maybe starting the list at number three: we're about to start seeing that. Most of what we're doing right now is pilots where we're just capturing the data. No one's analyzing the historic data.

So nobody has any idea if they want to, you know, what model is being used or if they want to swap it out or things like that.

So that's going to be coming and I think it's...

I mean, yeah, and getting back to how do we evaluate these things?

Hey, let's switch you over to this model because it's better for your particular radios. Or if it's the after-action analysis of the base transcript, you can take a vanilla model, but really it's going to be a whole pile of prompts and potentially adding some agent capability, some MCP, to go and match that up with some other things.

So I think we're going to get into that.

But it's not there yet.

I just know it's kind of looming. Going off the list, number two: there is a fair bit of latency with what we have, so we know it doesn't feel instant and natural.

So far, that's... it hasn't been a big deal.

People talk about it, and then they...

It's kind of, I guess, the nuanced point that we can sort of explain it away at the moment. But if we're starting to compete with non-walkie-talkie-based ones, I don't know, I think the difference will show at that point.

Yeah, yeah.

Production-faithful eval system.

Yeah, it's difficult for us to test here because, like, nobody uses walkie-talkies other than to just do the "testing, 1-2-3" kind of speaking into the walkie-talkie to make sure it works. So we lack a real store environment.

If like our manufacturers were on radios or something all the time, we would just monitor our own systems and yeah, we lack that.

So.

Difficult to see how it would.

Yeah.

So, yeah, in a test environment, it's pretty hard to replicate.

Would you say none of these essentially would be in the top three things that you would want, basically?

Yeah. Not yet, I think.

Yeah.

We don't have enough in production that we worry too much about the regression. I mean, the changes are small.

It's a small enough number of deployments that we can kind of monitor them.

And they're friendly customers that we're talking to all the time and we're working with them.

So they're...

Yeah, I think that one's not a problem yet.

I think once we get past, like, 20 of these units in the wild, that's gonna be too much to have our finger on the pulse of. So, I don't know, in three months for sure there's gonna be a problem there; we're gonna need metrics coming out of each of those systems to tell us if there's a problem.

Yeah, but not yet.

Not yet.

You've got other problems right now.

Ryan, would you think it'd be fair to say... I don't know, it's hard for me to suggest these without leading you down a path, and I don't want to, but I also know it's hard to just pull these out of your head. Do you reckon you would have a go at saying what your top three gains are, or would you prefer me to just guess at them and then you can say yeah or no?

One of them is going to be, you know, the manufacturing side of things, just because that is a pain point we still have. Every unit that we've produced is slightly unique, a little unicorn; there's something that gets tweaked, so it's not quite streamlined to where we just crank another one out. It's close, but it's not quite there yet.

And there's, you know, what's the enclosure? Hey, we can't get this one this week, so let's use this other one that's almost the same. So it's not quite...

Yeah, it's not assembly line just yet.

So I think we're close.

We just need to accept that this is what the unit is going to cost to build and apply the expensive parts that we know we can get.

That one would let us just crank these things out and get them into the wild.

So it would just be to be able to manufacture consistently.

Produce at scale, yeah.

I know nothing, but whenever I've heard, like, Elon Musk talk about manufacturing, I think he always says building it is easy; it's doing it at scale that's the hard bit.

Yeah, I can imagine.

Yeah, I can build one of these by hand, but I can't build 10.

Yeah.

Okay, so that's number one thing that you would want.

So like some magic thing that would just be able to help you manufacture.

Yeah, yeah.

Number two would be, and one and two could swap, but, you know, the silver bullet on just audio quality over a radio.

A specific model that is tuned to radios, which potentially is something that we could build. But it's trained on static-filled radios; it's trained on the domain of using a radio in a retail environment, where you're talking about aisles and stock and shoplifters, and that's what the conversations are about.

You know, the models that are out there that we're working with are amazing, but they're tuned to work for everybody in every situation, and not our super-specific situation.

Yeah.

Yeah.

And I think there's probably lots of knobs and dials that we can turn on individual models. We just haven't had the time to focus on that, to see, you know, can we get what we need out of Azure Speech to Text if we just give it this extra bit of context? Or switch off the vanilla Azure Speech to Text and turn on, you know, OpenAI, talk to it directly, where we can give it more information upfront. That's certainly something that we ought to be exploring.

Yeah.

Yeah, yeah, yeah.

And then actually something like what you guys do, which lets you swap between all these models; you can throw the same inputs at them and just see what gets the better results.

You guys support a super wide assortment of models. Like, we don't have that, so we don't know what we're missing.

Yeah.

Yeah.

If we want to sink several days into getting Google Gemini working... yeah, you know, we can, but we've got to go build that into the product just to find out if it's any good.

Sure.

It's hard to do quick tests.

Yeah.

Yeah.
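The "same inputs, several providers" comparison Ryan wants usually boils down to word error rate against a reference transcript. A minimal WER via word-level edit distance, with placeholder provider names (not a real multi-provider harness):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def rank_providers(reference: str, outputs: dict) -> list:
    """Sort {provider_name: transcript} by ascending WER (best first)."""
    return sorted(outputs, key=lambda p: wer(reference, outputs[p]))
```

With a few dozen hand-transcribed radio clips as references, this kind of harness answers "which model is best for our radios" without building each provider into the product first.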

Okay, awesome.

Do you have a number three, would you say?

Number three would be just... we kind of lack a good management system for rolling out new customers, like management of the SaaS infrastructure. So it's a bit manual.

Yeah, it's not hard. It's just that a human's got to do it, and it's error-prone. We don't have a system to just press the button and out pops a new store location, ready to go. So, yeah, the onboarding side of things.
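The "press the button, out pops a store" step is, at its core, templating a per-tenant config. A toy sketch; all field names and defaults here are invented for illustration, not Synqtech's actual schema:

```python
import uuid

# Illustrative defaults; a real deployment would pull these from a template.
BASE_CONFIG = {
    "asr_provider": "azure",
    "region": "eastus",
    "vad_enabled": True,
}

def provision_store(tenant: str, store_name: str) -> dict:
    """Produce a ready-to-deploy config for one new store location."""
    config = dict(BASE_CONFIG)  # copy so tenants never share mutable state
    config.update({
        "tenant": tenant,
        "store": store_name,
        "store_id": str(uuid.uuid4()),
    })
    return config
```

Even this small step removes the "human's got to do it" error class: every store gets the same defaults and a unique ID, instead of a hand-edited config file.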

Yeah, I've actually heard that from other people.

Yeah.

Yeah, sometimes it's pretty easy.

If someone's just doing a trial, we'll just add them to our multi-tenant environment and drop a config file in place and it works.

It's manual, but it's pretty easy.

We've just gone through... it's not quite ready for primetime, but we've got a listing, it's private at the moment, but, like, an Azure listing where you can just add our appliance as an item, and then it spins up all the resources you need in your own little environment.

That's ultimately what I think is the future of spinning up our set of resources for a larger organization like a Target or a Walmart or somebody.

That needs their own dedicated pool of resources.

So that's at least a button click to get things up and running.

So we're taking some steps there.

Yeah, that's that.

How long does it typically take? I mean, I guess you said it varies, but, like, to onboard?

Oh, so, I mean, with this new button press, it's probably five minutes.

Okay.

To create all the resources and then they're running live.

Yeah, so it's not going to be like a customer could self-serve to get the resources.

There's no particular urgency to have it be instant.

Yeah.

The type of sale and customer we're after, they're not sitting on the website hoping to press a button there to try to test out an instance.

Yeah, yeah.

Because there's the hardware component to it, it's going to be a slow process.

Okay.

And then is there a lot of onboarding around other things?

At the moment, since we're not giving people access to the underlying data, we're kind of gatekeeping the data that gets captured and how you go about analyzing it.

That all goes to us.

Eventually, it needs to not do that and kind of give people tools to access their own data.

So at that point, that would become part of their onboarding experience.

Not only do you press the button to create these resources, but your Microsoft account is how you get into the system, and it gives you access to dashboards and the transcripts and all that stuff.

Once we've built that, then that'll be kind of an additional onboarding piece.

But the key, I think, would be to have it sort of be automatic. It's tied into the whole... whatever user created the resources in Azure, they become the administrator, and they can manage their own users themselves. We don't have to do all that stuff.

Okay.

Yeah.

Yeah.

That makes sense.

Funnily enough, that takes us way down the path of vendor lock-in, once we get all those things.

Yeah.

It'd be very difficult to spin that off into another environment.

That's always a trade-off, isn't it?

Okay, we could maybe just quickly go and touch on these, and we'll skip the change-of-environment one.

We've got four minutes, so conscious of the time.

Okay, jobs to be done.

How do these look to you as top three?

Actually, the VAD one, definitely.

There's certainly some background noise that's causing... like, it's nothing, it's even just silence, but it causes the system to spend money doing transcriptions of nothing. So that one is definitely top of mind.
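Even a crude energy gate in front of the transcription call would drop the silent chunks Ryan mentions before they cost anything. A sketch; the threshold is an assumed value to be tuned per site, and production VADs (e.g. the one in WebRTC) use richer features than RMS energy:

```python
import math

def is_speech(samples, full_scale=32768, energy_threshold=0.02):
    """Crude energy-based VAD over a chunk of 16-bit PCM samples.

    energy_threshold is illustrative; real VADs use frequency-domain
    features, not just RMS energy, so this will pass loud static.
    """
    if not samples:
        return False
    rms = math.sqrt(sum((s / full_scale) ** 2 for s in samples) / len(samples))
    return rms >= energy_threshold

def filter_chunks(chunks):
    """Keep only the chunks worth paying to transcribe."""
    return [c for c in chunks if is_speech(c)]
```

The trade-off is false negatives: too high a threshold drops quiet speech, which on low-quality radio audio may matter more than the cost savings.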

Number two is probably not us, or at least not at the moment. But, like, a system health thing, maybe it's kind of related: being able to have a picture of the system health. Are we getting reasonable returns out of this store today, yes or no, and have that alert us if something's up.
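The per-store health picture could start as nothing more than a daily ratio of usable transcriptions with an alert threshold. A sketch; the 0.5 floor and the data shape are placeholders, not anything from the conversation:

```python
def store_health(results, min_success_rate=0.5):
    """Summarize one store's day of transcription attempts.

    `results` is a list of booleans: True if a chunk produced a usable
    transcript. Returns (success_rate, alert).
    """
    if not results:
        return (0.0, True)  # no data at all is itself worth an alert
    rate = sum(results) / len(results)
    return (rate, rate < min_success_rate)

def alerting_stores(stores):
    """From {store: results}, return the stores that need attention."""
    return sorted(s for s, r in stores.items() if store_health(r)[1])
```

This is exactly the "metrics coming out of each of those systems" Ryan says they'll need once they pass ~20 units in the wild: a yes/no per store, checked daily, instead of keeping a finger on each pulse by hand.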

And then, yeah, for number three, I think our version of that would just be to experiment with more providers and see if there's a winner that's just the best for radios. Or, if there's no clear winner, then we need to invest in developing our own models or our own training sets, and see what we can get beyond those existing models if we train a custom one.

Okay, so would you say, for you, in terms of voice AI, your jobs to be done, what would you say? Do you think you could have a stab at a top three? Just thinking.

Jobs to be done, yeah. I think, actually, the VAD one, interesting little comment there: there is an immediate problem we need to solve, which is just to improve the VAD. Because that's costing money and creating confusion, because there's these inaudible results. Basically, if it just threw out the nonsense, that would have an immediate impact.

Would you say that's your number one job to be done?

I think that's number one right now.

Yeah.

Yeah, because it's kind of eroded confidence in the system, all these, like, "what was this untranscribed chunk of audio that was one second long?"

And it was nothing.

Super helpful.

And then two would be experiment with providers to find the best model for radio use cases.

Yeah.

Yeah, interesting.

And it sounds like that is almost like a subset of number one.

Kind of, I guess.

Yeah.

Okay, Ryan, we're at pretty much that time, so I just wanna say thank you very much.

This is extremely, extremely helpful.

Yes. But, I don't know, if you want to send me what was missed on the list, I'd be happy to fill it in.

Oh, yeah.

It's just the changes-in-environment one.

I can send it to you.

Yeah, absolutely.

If you have time, but no, no pressure as well.

This is all, it doesn't need to be, like, completely complete, so.

And I think this is the least important one for us right now, so this is extremely helpful.

So no, no pressure.

And hopefully, hopefully it's useful and.

Yeah, anything else I can do?

I'm happy to hop on a call.

Thank you.

Are you still happy to continue doing these once a month?

We're going to change this.

Yeah, yeah, okay, amazing.

And if you want, like if there's a little homework assignment or something to do ahead of time, I'd be happy to.

Thank you.

Yeah, we're prepped.

If that works for you, I'm happy to.

We're honestly already so grateful, so just don't want to take up any more of your time.

So thank you very much.

No worries.

Okay, thanks.

Have a good evening.

Thank you, Ryan.

Have a good evening.

See you later.

Bye.