
AI attention span so good it shouldn’t be legal

We have another two-for-one special this week, with two more interviews from the floor of re:Invent.


First, Ryan welcomes Pathway CEO Zuzanna Stamirowska and CCO Victor Szczerba to dive into their development of Baby Dragon Hatchling, the first post-transformer frontier model, from how continual learning and memory will transform AI to the real-world use cases for longer LLM attention span.

In the second part of this episode, Ryan is joined by Rowan McNamee, co-founder and COO of Mary Technology, to discuss bringing AI into the carefully governed world of litigation and how LLMs are helping lawyers manage and interpret the vast amounts of legal evidence that pass across their desks every day.

Pathway is building the first post-transformer frontier model that solves for attention span and continual learning.

Mary Technology is an AI for attorneys that turns evidentiary documents into structured, easy-to-review facts.

Connect with Zuzanna on LinkedIn and Twitter.

Reach out to Victor at his email: victor@pathway.com

Connect with Rowan on LinkedIn.

We want to know what you're using to upskill and learn in the age of AI. Take this five-minute survey on learning and AI to have your voice heard in our next Stack Overflow Knows Pulse Survey.

TRANSCRIPT

[Intro Music]

Ryan Donovan: Hello, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ryan Donovan, and today we have two recordings from AWS re:Invent, recorded on the floor. We have interviews with Pathway and Mary Technology, so please enjoy.

Ryan Donovan: I'm here at re:Invent talking about models other than transformers, the next level, and I'm here with Zuzanna Stamirowska, CEO of Pathway, and Victor Szczerba, Chief Commercial Officer of Pathway. So, welcome to the show. Can you tell me a little bit about what Pathway is doing?

Zuzanna Stamirowska: Yeah, absolutely. Hey, thank you very much for having us.

Ryan Donovan: Of course.

Zuzanna Stamirowska: So, Pathway is building the first post-transformer frontier model, which resolves the fundamental problem of current LLMs: memory. The models that we are actually training right now will be capable of continual learning, capable of long-term reasoning, and of adaptation; imagine live AI. So, this is what we're building, and this is really very deep innovation. We took a first-principles view on how intelligence works, and how the transformer works, actually; rolled back a bit in history and rethought all of it from first principles; and then looked a little bit at the brain, how it works, and found a link between transformers and the brain. We've already published bits of what we were doing. So we published the BDH, Baby Dragon Hatchling, architecture, which was trending on Hugging Face in October. And yes, this is the beginning of the post-transformer era.

Ryan Donovan: The model, is it still a neural net? Would it be familiar if somebody looked at the weights and biases of a transformer model? Would it be familiar, or is it something completely different?

Zuzanna Stamirowska: Yes and no.

Ryan Donovan: Yeah.

Zuzanna Stamirowska: So, first of all, maybe a little bit of background. Right now, almost all of the models that we see out there feel the same because they are the same. They're based on the transformer, and there's been a brute-force approach: put in more data, more compute, more layers, more everything, and it will just get better. We've seen there won't be enough energy to actually power all the inference, and we see that LLMs, specifically LLMs understood as just scaling with more data, et cetera, won't get us to AGI. Right now, even OpenAI researchers are saying that openly. And then, in terms of what we've done, we even rethought attention. So, yes, it's very different. The way our model works is way closer to the brain. The brain is a beautiful structure where you have neurons. Neurons can be viewed as small computational entities, simply that: small computational entities. And neurons are connected to each other, forming a network of connections. It's a physical system with local activations. And we have a brain which is pretty large: 100 billion neurons and a thousand trillion synapse connections, still packed into a very efficient structure, because our heads have to be light. The head has to fit into our frame; it has to be light. We have to be able to walk on two feet and not fall over. The brain is super efficient. It is capable of generalizing over time, it is capable of lifelong learning. We're born, we learn, we taste soap once, and we know we shouldn't be eating soap. We don't need to see all the soap data, or taste soap thousands of times, before we understand that soap is not good for you. The brain is a physical system that exists, that has all the required properties that we would love to have in an AGI. So, what we did is look at the brain a little bit, the way scientists looked at birds when they wanted to make planes, and ask of the transformer, 'Okay, how can we get from the transformer, what is missing in the transformer, to get us closer to the brain?' We found that link, and the model and the architecture work in this way: we have neurons, we have synapses. When a new token of information arrives, and it may arrive at any time, we have neurons that fire up. So, we have one neuron, for example, that fires up and then sends a message to its neighbors, to whom it's connected by a wire, right? By the synapse. So, it passes the message. Let's say a certain threshold of importance is reached for the neighbor, and the neighbor fires up as well. This is the basic principle of something called Hebbian learning. It's actually a very simple brain model, really. But these interactions are local. So, it's a very simple rule: I have a message, I send you a letter; if you care enough, you fire up as well. If you fire up, the synapse, the connection between us, becomes stronger. And since the connection becomes stronger, the association becomes stronger as well. Ultimately, it gives us intrinsic memory. We actually do have memory inside the architecture itself, and we have local dynamics. And this locality gives a lot of nice features. One is the fact that it's extremely computationally efficient, because we don't fire up huge matrices; we literally just apply all the principles of distributed computing. It distributes nicely because you can shard it easily.
So, we can distribute it differently than with a transformer. When you have this energy efficiency, memory is a given.
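
For readers who want to picture the local "fire and strengthen" rule Zuzanna describes, here is a minimal sketch. It is not Pathway's BDH code; the threshold, learning rate, and function names are assumptions made up for illustration.

```python
# Minimal sketch of a local, Hebbian-style "fire and strengthen" rule.
# NOT Pathway's BDH implementation; thresholds, names, and the learning
# rate are illustrative assumptions only.

from collections import defaultdict

THRESHOLD = 0.5      # how much incoming signal a neuron needs before it fires
LEARNING_RATE = 0.1  # how much a synapse strengthens when both ends fire

# Sparse synapse weights: (source_neuron, target_neuron) -> strength
synapses = defaultdict(float)
synapses[(0, 1)] = 0.6
synapses[(1, 2)] = 0.4

def step(active_neurons):
    """Propagate activity one step and strengthen co-firing connections."""
    incoming = defaultdict(float)
    for (src, dst), w in synapses.items():
        if src in active_neurons:
            incoming[dst] += w            # "I send you a letter"

    newly_fired = {n for n, signal in incoming.items() if signal >= THRESHOLD}

    # Hebbian-style strengthening: if both ends fired, the synapse gets stronger.
    for (src, dst) in list(synapses):
        if src in active_neurons and dst in newly_fired:
            synapses[(src, dst)] += LEARNING_RATE

    return newly_fired

fired = step({0})          # neuron 0 fires on a new token
print(fired)               # {1} -- neuron 1 crossed the threshold
print(synapses[(0, 1)])    # ~0.7 -- that connection is now stronger (memory)
```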

Ryan Donovan: It sounds like– we talked about the synapses and nodes. How do you represent that in terms of storage and computation? With a neural net, it's just a series of floats, arrays with billions and billions of parameters. Is this a sort of vector math, or is it the same sort of array?

Zuzanna Stamirowska: Yeah, so for us, the equivalent of parameters really is the synapses. We can have a lot of them, but they're sparse. So, as opposed to the matrices that you have, the structure we get is a sparse structure, defined by the synapses. Then, specifically in terms of how we look at vectors, we actually have a different interpretation. We look only at positive and sparse activations, and this is one of the favorite topics of our CSO. Our probability spaces are kind of different. You may imagine a code, in fact, with only positive vectors that are sparse, so you can't even encode everything. But this means that we don't have negative vectors, we have only positive. So, you cannot encode the same thing both as positive and as negative. One of the examples I would give is: if you need to repaint, you don't tell the guy who's renovating your house how to do his job well by showing him a job badly done. That doesn't give him information on how to do it well. So, we only have positive, and these positive activations actually work by strengthening the synapses. It's just a positive message, in a way, that you send within the network. Operationally, however, it all works on GPU, and we needed to do some tricks. Ideally, we would just deal with sparsity and have an implementation with just sparse vectors, but we applied some math to make it fit onto the GPU. So we have this sparsity, but somehow hidden, and it still runs on H100s, and in terms of learning capability it actually exceeds GPT-2; that's our architecture versus the classical transformer one. And what we're mostly looking at, in fact, is reasoning. So, instead of focusing on LLMs understood as, okay, language models, really our goal is to get to reasoning models.
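
To make the "sparse, positive-only" point concrete, here is a toy comparison between a dense signed vector and a sparse non-negative one stored as index/value pairs. This is an illustration under our own assumptions, not BDH's actual encoding.

```python
# Toy illustration of sparse, positive-only activations versus a dense
# signed vector. An assumption-laden sketch, not BDH's actual encoding.

dense = [0.0, -0.3, 0.0, 0.8, 0.0, 0.0, 0.1, 0.0]   # classic dense vector, signed

# Sparse positive representation: only the active units are stored, and
# activity can only add ("strengthen"), never encode a negative of something.
sparse_positive = {3: 0.8, 6: 0.1}

def strengthen(activations, unit, amount):
    """Positive-only update: activity accumulates, it never flips sign."""
    activations[unit] = activations.get(unit, 0.0) + abs(amount)
    return activations

strengthen(sparse_positive, 3, 0.2)
print(sparse_positive)   # {3: 1.0, 6: 0.1} -- still sparse, still non-negative
```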

Ryan Donovan: And when you say it's constantly learning and updating, the model itself is changing with every action, is that right? What sort of computational difficulty does that add? Because it seems like with the classical transformer model, you hold the whole thing in memory. Is there a computational load, or is it more efficient because of the architecture of the model?

Zuzanna Stamirowska: So, it's more efficient by a lot. So, you have memory which is very close. And then, on the chip side, you actually literally keep it in memory, and you keep your state in memory on the synapses.

Ryan Donovan: The model and the state, are they different things? The model is the state?

Zuzanna Stamirowska: Your synapses, of which there are just a trillion, are your state, pretty much, and you keep it in memory. For the geeks out there, it's somewhat close to Geoff Hinton's concept of fast weights. Then, of course, there is the question of slower weights that you want to have in long-term memory, but that question is, in fact, way easier than having this kind of synaptic plasticity mechanism at the first stage.
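
Loosely in the spirit of the fast-weights idea mentioned here, the sketch below separates "slow" weights (frozen after training) from a synaptic state updated in place during inference. The sizes, decay, and update rule are arbitrary assumptions for illustration, not Pathway's code.

```python
import numpy as np

# Sketch: "slow weights" frozen after training versus a "fast" synaptic state
# updated in place at inference time, which acts as memory. Sizes and the
# decay/update rule are arbitrary assumptions for illustration.

rng = np.random.default_rng(0)
N = 8                                   # number of neurons in the toy model
slow_W = rng.normal(size=(N, N))        # learned offline, then frozen
fast_state = np.zeros((N, N))           # per-session memory on the "synapses"

def infer(x, decay=0.95, eta=0.1):
    """One inference step: read through both weight sets, then update the state."""
    global fast_state
    y = np.maximum(slow_W @ x + fast_state @ x, 0.0)        # positive activations
    fast_state = decay * fast_state + eta * np.outer(y, x)  # Hebbian-style trace
    return y

x = np.zeros(N); x[2] = 1.0
y1 = infer(x)
y2 = infer(x)      # same input, but the state has changed -- the model "remembers"
print(np.allclose(y1, y2))   # False (in general), because fast_state evolved
```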

Ryan Donovan: With the transformer model, we've found, obviously, it's non-deterministic, so it's got hallucinations. Does this help address the hallucination problem?

Zuzanna Stamirowska: It does. Disclaimer: on every level there is some sort of compression everywhere, but with reasoning models, and especially with the fact that we get this kind of scale-free structure, we expect models based on BDH to actually generalize better across scales not observed in the training data. And maybe you know the METR benchmark on how long a model can stay focused on a task; it's really the equivalent of how long a task would have taken a human to accomplish, and how far along we are in, let's say, attention span for models. For GPT-5, it's two hours and 17 minutes, with ~50% success rates. So, the model's day, its Groundhog Day, lasts two hours and 17 minutes, and then a hallucination is very likely, 'cause you fall off course.

Victor Szczerba: Yeah.

Zuzanna Stamirowska: So yes, if you can infuse time—a kind of time sequence—into the model with memory, you actually stay focused on a task for way longer. So, the likelihood of hallucinations goes down. And people hallucinate, as well, and we also compress, and we maybe sometimes [can] be bad at recall, or whatever.

Ryan Donovan: Are there applications or other emergent properties you're seeing that come from this particular model architecture?

Zuzanna Stamirowska: We're mostly interested in reasoning and puzzles. So, the holy grail is generalization, right? We go after generalization over time, but also generalization from small data. This is something that's very important for enterprise, and we're at re:Invent, right? I have a lot of valuable data, and having memory that is intrinsic and contextualized to the user creates stickiness, and actually, huge value for enterprise customers.

Ryan Donovan: Has this led to any increased or other use cases, emergent abilities?

Zuzanna Stamirowska: Generally speaking, we're looking at generalization, right? Generalization over time, and then generalization also from small data. This is something that current LLMs are not really capable of doing. They're mostly pattern matching and not extrapolating. And because actually, reasoning seems to be pretty similar to finding pathways—I've even described in a paper how reasoning really works, that this is like a series of transformations on the pathways, really. Then, we believe that the biggest promise here is in long-term reasoning and solving complex problems.

Victor Szczerba: I would say, as far as those areas where the increased functionality is, number one is that long attention span. So, instead of, let's say, going in and using a customer service bot that has an attention span of three minutes, think about something like an end-of-quarter process for a company, which might actually involve 10 departments, run eight weeks long, and involve lots and lots of folks. That's bucket number one. Bucket number two is really those things Zuzanna was talking about: learning from tasting soap once.

Ryan Donovan: Yeah. Yeah.

Victor Szczerba: So, the reason why fine-tuning actually doesn't work in a lot of enterprises is they just don't have enough data to counterbalance the weights inside the model–

Ryan Donovan: Terms that exist and trend.

Victor Szczerba: Exactly. You would almost have to try soap, I don't know, 10,000 times before, and that's how the large LLMs work, right? They need lots and lots of data, but you might actually only be redesigning a new platform maybe every 15 or 20 years. There's a lot of corporate memory that went into that process, and if you have a long-term employee who went through it, they might remember it. Our model could actually go through and learn that process from a very thin data use case. And then the third bucket is observability. Most LLMs are black boxes by nature. With those synapses that Zuzanna was talking about, you can actually observe what's going on, so you can go in, for highly regulated industries, and see exactly what's happening inside the model.

Ryan Donovan: So, the way that transformers find meaning is cosine distance. It almost sounds like you're talking about pathing through the memory.

Zuzanna Stamirowska: There is a notion of structure, in the sense that structure matters. It sounds silly to say, 'yes, structure matters,' but imagine this: nature was very smart at [INAUDIBLE] and smart at finding very efficient structures. We already find similar structure across different synapses, but at the end of the day it's kind of–

Ryan Donovan: It's a network problem.

Zuzanna Stamirowska: Somewhat similar. You have nodes and edges, and systems in nature just want to be somehow efficient and resilient. You have these two forces that try to find a trade-off, but not through some sort of global rule. They have local interactions, and by interacting, somehow, they give rise to something larger, and this is the same thing that humans do.

Ryan Donovan: So, with the memory built into the model, does this obviate RAG structures, or massive system context prompts that people put in?

Zuzanna Stamirowska: Yeah, exactly. Because your context is only limited by the size of your hardware. Your context sits on the synapses, and it's huge; I put a number on it earlier. So, yes, there's no problem with context windows. It's technically not infinite, it's just limited by the size of the head, and the structure permits us to fit so much into it.

Ryan Donovan: Yeah.

Zuzanna Stamirowska: And plus, just for the geeks, it opens up such amazing things that we can do. Actually, in the paper, we glue models together. For people, it's hard to glue two brains together. It turns out that because our model grows in only one dimension, it shards very well, and it also allows us to take, for example, two models trained in two different languages and literally just glue them together, and have them immediately, even without any runs of training, mix up the two languages. And then with some training, connections actually get created between them, and you literally glue models like Lego blocks. We have synapses that tune out when they hear a message, 'okay, I don't care anymore,' and we actually observe it. We see their activity just going down. So, it's a bit like having CCTV inside the brain, instead of building a huge MRI machine to try to scan the transformer.
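
A crude way to picture the "glue two models together" idea is to place two sparse synapse graphs side by side and let later training add cross-connections. This is purely illustrative; the offsetting scheme below is an assumption, not Pathway's published merging procedure.

```python
# Illustrative sketch of "gluing" two sparse synapse graphs into one larger
# graph. The id-offsetting scheme and the idea of adding cross-edges through
# further training are assumptions for illustration only.

model_a = {(0, 1): 0.6, (1, 2): 0.4}    # e.g. trained on language A
model_b = {(0, 1): 0.9, (1, 3): 0.2}    # e.g. trained on language B

def glue(a, b, size_a):
    """Concatenate two sparse graphs; B's neuron ids are shifted past A's."""
    merged = dict(a)
    for (src, dst), w in b.items():
        merged[(src + size_a, dst + size_a)] = w
    return merged

combined = glue(model_a, model_b, size_a=3)
print(combined)
# Cross-language synapses don't exist yet; further training could add edges
# that connect the two halves of the merged graph.
```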

Ryan Donovan: It's super interesting. You've already said it solves the observability problem. Does this mean that you can grow the size of a model over time? Can you just keep encompassing more and more stuff, gluing more models together? Is there a point where it runs into the issues of context rot or model collapse, where it's unable to find paths?

Zuzanna Stamirowska: So, the scientific answer would be no, because of the scale-free property of the structure. So, if you know how a broccoli looks, or roughly what a fractal is–

Ryan Donovan: Yeah.

Zuzanna Stamirowska: This is what we have there. This is how we know the dynamics of local interactions. So, if we zoom in, let's say, on a cluster, it will behave in a similar way. This structure has properties that work at any scale. So, it's predictable. So, in principle, no issue. Then the question is, okay, when you want to deploy it, do you want to have a big one, where, for a specific function, you just fire up a little bit of it? Or do you have small ones deployed all over the place? This is something that we will honestly see with customers once we deploy it. But there is this possibility. Some things that we will be looking into in the future will be, for example, having your Baby Dragon trained and deployed within your legal department be glued with your financial department's, for example, and then creating, of course, this higher-level judgment of what should be done. One of the examples I like to give is our CSO, Adrian. He's a quantum physicist, computer scientist, and mathematician in one. I had a PG 20. But the thing is that if we had hired one mathematician, one quantum physicist, and one computer scientist, we wouldn't have done BDH. That's the value of actually having everything in one connected–

Ryan Donovan: Is there a limit to how many synapses can be connected through a single node?

Zuzanna Stamirowska: Yes, in a sense, because once you get to the structure of the... so, let's say you could have a complete graph, but why? And nature usually wouldn't allow it because it's stupid.

Ryan Donovan: It would be meaningless.

Zuzanna Stamirowska: Yeah. So, the point is that we also don't give the structure. The structure is not imposed; it emerges naturally from the very local, simple rules of, 'okay, I got the message.' And this creates an efficient structure, and we actually see it has a long-tail distribution of degrees. It's just a property that we observe, not something that we define, but generally speaking, you don't want to have a complete graph. It would be very weird if we were to observe a complete graph.

Ryan Donovan: You talked about some of the use cases. Are there use cases you're finding that are particularly suited to this architecture?

Victor Szczerba: We're hunting in industries right now and talking to people who are bringing their ideas to us. One of those examples, I would say, is medical record review for insurance. That's a pretty complex process, right? Very exception-driven. Cutting edge right now is RL, and still a whole bunch of these things get done with human reviewers. We could actually go in, take that whole process, look at it, and say, 'okay, yes, you might have had this procedure, but then you had some kind of complication, so it's actually the two together. This is actually really normal, and it would've been rejected through a regular process.' So, that's an example in a highly regulated industry. The way we define ourselves is not necessarily how the industry defines LLMs, right? The size of the model, the number of parameters, is irrelevant for us. The context window is irrelevant because we, in many ways, almost have an infinite context window. So–

Ryan Donovan: The model is the context window, right?

Victor Szczerba: That's right. Yeah. So, trying to put us in the same box against those parameters or whatever is a little bit–

Zuzanna Stamirowska: I believe that these approaches getting us closer to the brain are the faster way to AGI.

Ryan Donovan: There have been a lot of knowledge graph and supergraph approaches tacked onto this. So, this is interesting.

Zuzanna Stamirowska: So, I'm Zuzanna Stamirowska, CEO and Co-founder of Pathway. You can reach out to me on LinkedIn or Twitter.

Victor Szczerba: And I'm Victor Szczerba. I'm the Chief Commercial Officer, and you can reach out to me at my email [which is] Victor@Pathway.com.

[Music Interlude]

Rowan McNamee: I'm Rowan McNamee. I'm the Co-founder of Mary Technology.

Ryan Donovan: Tell us a little bit about what Mary Technology does.

Rowan McNamee: We call Mary Technology, 'ol' Mary,' a fact management system. Ultimately, we help lawyers, or litigators, with the thousands or tens of thousands of pages of evidence that they have in a legal case or legal dispute they're managing. So, we take those thousands of pages and extract all of the facts, whether they're irrelevant or relevant, as well as organize those documents. We then provide additional functionality for them to quickly understand what's important, what might be important, and what's not, as well as run complicated questions and organize their documents, which is actually a massive task as well. It's a combination of some older machine learning and LLMs. Obviously, we leverage AWS and the best available models, plus some smaller models for other tasks. But yeah, it's a combination.

Ryan Donovan: Obviously, LLMs are non-deterministic, kind of variable in their outputs. Does that make the facts less valuable?

Rowan McNamee: Yeah, that's right. That's a very common question for us because a fact is sometimes more difficult to define in law. Something might be alleged; some might be completely factual. Sometimes it might be incorrect, but it is there written in the evidence. We try not to provide any sort of legal interpretation of a fact, but if it is present in the evidence, it is an event, whether it is alleged, whether it is incorrect, or whether it is a matter of fact, we try to include absolutely everything and be quite objective in the way we pull out information and events.

Ryan Donovan: So, when you pull out those facts, do you then store them? Do you vectorize them, put them in some sort of system for later retrieval?

Rowan McNamee: So, we do now. We didn't originally, and that was one of the interesting points of difference about our product: we are using LLMs to generate what we call a 'fact layer.' Where a lot of legal tools will immediately vectorize the documents and store those embeddings so people can ask questions, we actually took a bit of a different approach, because the interesting thing about litigation is that sometimes you don't know what the right question is yet, and you need alternative answers. So, like I mentioned, we objectively pull out all of the facts and store them, but we now also vectorize the original documents, and actually vectorize that fact layer as well. That way we can give a RAG system a bit more power to answer questions more thoroughly later on. We have multiple LLM systems that extract facts on a first pass, then go and check against the original source to ensure that they are correct. Of course, we're not necessarily checking that they are correct in the context of that matter, but correct compared with the source. We try to give lawyers the responsibility to then interpret that fact in a legal context. We will do some things like suggest whether we think a fact is relevant, because we do understand the context, and we explain why. We also, of course, give lawyers the ability to interrogate facts further by viewing the source side by side in the UI. But like I said, most of it is about getting lawyers to the original source as quickly as possible, with guardrails in place to compare against the source and ensure there are no hallucinations or other issues like that.
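
Here is a minimal sketch of a two-pass extract-then-verify flow like the one Rowan describes. The function names, prompts, and fields are hypothetical, and `call_llm` is a stand-in for whichever model API is actually in use; this is not Mary Technology's implementation.

```python
# Minimal sketch of a two-pass "extract, then verify against the source"
# fact pipeline. `call_llm` is a placeholder for the real model API;
# prompts and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Fact:
    text: str          # the extracted event, stated objectively
    source_page: int   # where in the original document it came from
    verified: bool     # did the second pass find support in the source?

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for the real model call")

def extract_facts(document_text: str, page: int) -> list[Fact]:
    """First pass: pull out every event, alleged or not, with no legal spin."""
    raw = call_llm(f"List every factual event in this text, one per line:\n{document_text}")
    return [Fact(text=line.strip(), source_page=page, verified=False)
            for line in raw.splitlines() if line.strip()]

def verify_fact(fact: Fact, source_text: str) -> Fact:
    """Second pass: check the fact against the original source, not the law."""
    answer = call_llm(
        f"Does the following source support this statement? Answer yes or no.\n"
        f"Statement: {fact.text}\nSource: {source_text}"
    )
    fact.verified = answer.strip().lower().startswith("yes")
    return fact
```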

Ryan Donovan: How do you approach organization?

Rowan McNamee: So, one of the most obvious use cases lawyers often have to deal with is that, as part of discovery, they'll receive a PDF bundle. It might have 7,000 pages in it. It's got duplicates. It's just a single PDF with a number as the title. We can use some machine learning as well as LLMs to actually split up those documents so we understand where they start and end; we name them, summarize them, and expose to the lawyer where they existed in that original bundle for traceability. We then give lawyers, again, the tools to understand which will be relevant and which they want to look at further. A blank page will be there, split out, and the lawyer will know it was included as part of the discovery bundle; we also de-duplicate, and things like that. But again, the difference with our product is that it's about trying to give the lawyer everything and point them in the right direction, while still giving them confidence that they can go back and see everything that was included.
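
To sketch the de-duplication and blank-page handling part of that workflow (assuming the bundle has already been split into per-page text, and leaving the ML/LLM document-boundary work aside), a traceability-preserving triage pass might look like this. This is our own illustration, not Mary Technology's code.

```python
# Sketch of de-duplicating and flagging blank pages in a discovery bundle,
# assuming the PDF has already been converted to a list of per-page strings.
# The point is traceability: every page keeps its original page number.

import hashlib

def triage_pages(page_texts):
    """Return (kept, blanks, duplicates) while preserving original page numbers."""
    seen = {}
    kept, blanks, duplicates = [], [], []
    for page_no, text in enumerate(page_texts, start=1):
        normalized = " ".join(text.split()).lower()
        if not normalized:
            blanks.append(page_no)              # still reported, for completeness
            continue
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        if digest in seen:
            duplicates.append((page_no, seen[digest]))   # (copy, original)
        else:
            seen[digest] = page_no
            kept.append(page_no)
    return kept, blanks, duplicates

pages = ["Letter of 23 Jan", "", "Letter of 23 Jan", "Medication chart"]
print(triage_pages(pages))   # ([1, 4], [2], [(3, 1)])
```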

Ryan Donovan: We had a fairly famous case a while back where a lawyer did use ChatGPT to write a brief, and was disbarred, I believe. What sort of indemnification do you provide for folks if, for some reason, the LLM gets it wrong?

Rowan McNamee: Yeah, sure. We've had many of these famous cases in Australia, as well. I think we're in the thirties now for lawyers who have used ChatGPT-generated cases. But no, obviously, the onus is still very much on the lawyer to check their sources. That's why we make it very clear in the UI to always check the original source. Obviously, we are trying to limit errors to the greatest possible extent, and I believe we're doing that as well as or better than anyone else, but it's not just us telling them it's their responsibility. The courts are now coming down and actually telling litigators that just saying, 'oh, a legal tool has messed up here,' doesn't cut it, and even ignorance of that won't cut it. So, it's very much the lawyer's responsibility. Of course, we're trying to give them the best tools we can and limit errors to the greatest possible extent.

Ryan Donovan: Yeah, if you have a research intern, you can't just say, 'it was them.'

Rowan McNamee: That's right.

Ryan Donovan: What sort of other things do you build into the product to ensure that trust? Because this is a big thing, and with developers, we found in our survey data that the more they use LLMs, the less they trust them.

Rowan McNamee: And that makes sense because every single error you encounter erodes your trust further. Yeah. So, it doesn't matter, even if 90% of the time it's getting it right. As soon as you find something that erodes that trust, it's just gonna make you less likely to use it.

Rowan McNamee: So, we call these things 'confidence tooling.' If there was some sort of error in there, then with the confidence tooling that we have, they should be able to immediately verify and catch it. So, I'll give you some examples. One is a system we call Inferred Dates. In messy legal evidence, a legal fact or a legal event is not always immediately obvious. They don't always perfectly say, 'on 23rd January, I did this.' It may be a long, complicated chart or table with eight different dates for different medications on its face; each time that person received a medication, that's its own legal event, or its own fact. However, if that's very difficult to interpret, we'll actually put a flag and say, 'we've inferred this date from the top of the chart,' and link to that source as well. So again, when things are difficult, we try to point them in the right direction. Another one we call Relevance Rationale. At the very start, we didn't provide any legal interpretation or legal analysis. We just gave people the facts and said, 'it's your job now to interpret this.' However, with such an overwhelming amount of information, we do want to point them in the right direction. So, we understand the matter context, or the case context. We can then look at the facts against that context and say, 'hey, we think this is of very high relevance,' but we won't just do that. We'll always provide the explanation that the LLM has given for exactly why, in that context, it rated it that way. Conversely, the same for very low relevance: why we don't think it's relevant. Lawyers can then, again, filter and check. There are more examples, but I probably won't get to them all today. But that's what we call Confidence Tooling.
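
One way to picture "confidence tooling" as data is a fact record that carries its inferred-date flag and its relevance rationale alongside a source reference. The field names and values below are our own illustration, not Mary Technology's actual schema.

```python
# Hypothetical shape of a fact record with "confidence tooling" attached:
# an inferred-date flag and a relevance rationale travel with every fact,
# and a source reference is always present. Field names are illustrative,
# not Mary Technology's actual schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FactRecord:
    event: str                           # e.g. "Patient received medication X"
    date: str                            # ISO date the event is placed at
    date_inferred: bool                  # True if the date came from, say, a chart header
    date_inference_note: Optional[str]   # how the date was inferred, for the lawyer to check
    source_document: str                 # which document and page the fact points to
    relevance: str                       # "high" / "medium" / "low"
    relevance_rationale: str             # why it was rated that way, in this matter's context

record = FactRecord(
    event="Patient received medication X",
    date="2023-01-23",
    date_inferred=True,
    date_inference_note="Date taken from the header of the medication chart",
    source_document="Discovery bundle, p. 412",
    relevance="high",
    relevance_rationale="Medication timing is central to the alleged negligence claim",
)
```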

Ryan Donovan: Interesting. So, do you then apply this to, say, precedent?

Rowan McNamee: Not yet. The precedent that we're looking at in particular is actually the IP of the firms that we work with. So, that's something we will do in the future. If you have the factual basis for a case, lawyers are very interested in similar cases in their own internal database that isn't necessarily public and that they don't necessarily want other firms to have access to. There are many great tools for legal research, for understanding case law, understanding legislation, and research generally. We are looking at partnering with those sorts of companies; they're great for those purposes. We've got a real focus on the facts, and I think that relates more closely to the factual basis for cases at your own firm. That's what we're interested in. But yeah, right now we are very deeply focused on building the best possible fact layer we can and giving that to litigators. It's a SaaS available via the browser. Firms log in using Microsoft or Google, and they access it directly from the browser. We do have integrations with other legal software, such as iManage, Smokeball, and other practice management and document management systems, but they actually access those documents from within the Mary web application.

Ryan Donovan: Are there security issues with that, with it being SaaS?

Rowan McNamee: Obviously, it's very important that you are using enterprise-grade, private models, and as well, data sovereignty. That's obviously the biggest one. So, in Australia we have to make sure we're using models that are in Australia and of the enterprise grade. Obviously, AWS helps massively with that. AWS partners [are] allowing us to do the same thing in the United States, and we'll, of course, have to do that when we go to Europe and other jurisdictions. That's very important. The other interesting thing about litigation is you can't train models using private customer case data. It's a very interesting challenge in this space. Whereas, of course, case law is public, legislation is public. That's why I think research is such a good use case in legal tech. But no, we have a different challenge. We absolutely cannot use customer data to start training models. So, we have to get very good at creating synthetic, fake data that we can then use. It's actually a very interesting machine learning and AI challenge.

Ryan Donovan: So, you talked about synthetic data. Are you simulating court cases on some level?

Rowan McNamee: Yeah. It's interesting. One way we do that is we take public judgements and then we simulate the evidence that might have been included in such a case. It's very difficult though because you have to maintain consistency across massive contexts. Yeah. A lot of our team works very hard on that, as well as, we don't wanna use PII and things like that, obviously. Even if it is a public case, we make fake cases. I'm a big SVU fan. Maybe that's another use case we haven't attacked yet. Maybe that'll come next.

Ryan Donovan: Is there anything you wanted to cover that I haven't touched on?

Rowan McNamee: Obviously, we are excited to be here in the United States as an Australian company. We have a great track record of Australian companies coming over to the US and doing really well. It's a great innovation hub down in Australia, so if you are interested in legal tech innovation, please reach out and speak to us. Let us know. We would love to show you what we're doing. As well, if you're an engineer interested in legal tech, we are certainly hiring. If you think this is an exciting problem, we'd love to hear from you. I'm Rowan McNamee, the Co-founder of Mary Technology. You can find us at marytechnology.com, or search Mary Technology on LinkedIn.
