In a previous blog, we looked at how end-of-day chat transcripts can be processed into structured quote data: a blotter populated not from a system of record but from the conversations that happened before anything was formally recorded. The value proposition there was recovery. Data that had been lost to history became usable.
This blog covers the next step.
Every request for quote (RFQ), pricing conversation and negotiation passes through chat. The problem has never been that the data does not exist. The problem is that it exists in a format nobody can act on at speed.
A portfolio manager (PM) or trader sends the same RFQ to ten counterparties across separate chat threads, waits for responses, then manually assembles a picture of where the market is. The intelligence sits inside the conversation. The work sits in extracting it.
That extraction is what the ipushpull inference engine automates.
On the buy side, an RFQ is typically issued to multiple counterparties across different chat rooms. Responses arrive in fragments. Different threads. Different timestamps. Different formats.
Constructing a comparable market view requires manual work. A trader or PM must read each response, extract the relevant details, and then reconcile them into a usable comparison. That process takes time, and the market rarely waits for it. By the time the picture is complete, conditions may already have shifted, and the price may need to be refreshed.
The underlying issue is structural. Intent arrives as unstructured text, and converting that intent into a structured, actionable record requires human effort. The effort is rarely justified in the moment, so the data is never captured. It disappears into archives, never queried and never used. The intelligence that sat inside those conversations disappears with it.
The flow works as follows:
Identify. A chat thread containing an RFQ is selected. The system detects the relevant messages (the initial request, the pricing responses and confirmations) and groups them as a single contextual record.
Infer. Structured fields are extracted: instrument, size, price, side, cut, counterparty and timestamps. A human confirmation step sits between inference and record creation.
Populate. The extracted data flows into a live Quote Hub in real time. The desk can see what has been captured as it happens rather than reconstructing activity at the end of the day.
Aggregate. Where the same RFQ has been sent to multiple counterparties, responses from separate chat rooms are pulled into a single consolidated view. Best price becomes immediately visible without manual consolidation.
Every inferred field remains traceable to its source message. Every confirmation is logged.
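To make this concrete, here is a minimal sketch of the kind of record the Aggregate step produces and how a best-price view falls out of it. The field names and the `consolidated_view` helper are assumptions for this post, not ipushpull's actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class QuoteResponse:
    rfq_id: str              # groups responses to the same outbound RFQ
    counterparty: str
    chat_room: str           # responses arrive in separate rooms
    price: float
    source_message_id: str   # provenance: the chat message this was inferred from
    confirmed: bool          # True only after the human confirmation step

def consolidated_view(responses: list[QuoteResponse],
                      rfq_id: str, buying: bool) -> list[QuoteResponse]:
    """Pull confirmed responses from separate chat rooms into one view,
    best price first: lowest if the desk is buying, highest if selling."""
    matched = [r for r in responses if r.rfq_id == rfq_id and r.confirmed]
    return sorted(matched, key=lambda r: r.price, reverse=not buying)
```

Note that `source_message_id` and `confirmed` travel with the record rather than being bolted on afterwards; provenance and confirmation are part of the data, not an afterthought.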
Beyond real-time capture, one capability has consistently generated the most interest from buy-side teams: similar RFQ recall.
Before issuing an RFQ, a PM can run a query against the firm's structured chat history. The system surfaces previously processed RFQs that match the current request: comparable instrument type, similar tenor and size.
The result shows which counterparties responded, how quickly they replied and where pricing landed.
This is structured recall of real market interactions that previously sat buried in chat archives. The information becomes available at the point of decision. A desk can assess which counterparties are most likely to respond quickly or competitively before the RFQ is even sent.
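A deliberately simplified sketch of what that recall query might look like. The matching criteria, tolerances and field names are illustrative assumptions; production similarity matching would be richer:

```python
from dataclasses import dataclass

@dataclass
class HistoricalRFQ:
    instrument_type: str     # e.g. "FX option", "IRS"
    tenor_years: float
    size: float
    counterparty: str
    response_seconds: float  # how quickly the counterparty replied
    price: float

def similar_rfqs(history: list[HistoricalRFQ], instrument_type: str,
                 tenor_years: float, size: float) -> list[HistoricalRFQ]:
    """Surface previously processed RFQs that resemble the current request:
    same instrument type, tenor within 3 months, size within +/-50%.
    Fastest responders first, as a rough proxy for likely engagement."""
    matches = [
        r for r in history
        if r.instrument_type == instrument_type
        and abs(r.tenor_years - tenor_years) <= 0.25
        and abs(r.size - size) / size <= 0.5
    ]
    return sorted(matches, key=lambda r: r.response_seconds)
```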
Over time, the dataset compounds. Each processed RFQ expands the historical record. The longer the system runs, the more signal the desk accumulates.
The inference engine is not a standalone AI tool. It sits on top of ipushpull's capture and orchestration layer, which handles ingestion, interpretation, standardisation and delivery across chat, Excel and downstream systems.
This architecture matters in regulated environments. Every output is explainable because the source message, the inferred fields and the human confirmation are all recorded as discrete linked events. There is no opaque layer between the chat message and the structured record.
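In code terms, that chain of linked events might look like the following minimal sketch. The event types, IDs and payloads are assumptions for illustration, not ipushpull's actual data model:

```python
# An append-only log where each event links back to the one before it,
# so any structured record can be walked back to its source chat message.
events = [
    {"id": "evt-1", "type": "source_message", "parent": None,
     "body": "can show 101.25 for 10mm"},
    {"id": "evt-2", "type": "fields_inferred", "parent": "evt-1",
     "fields": {"price": 101.25, "size": 10_000_000}},
    {"id": "evt-3", "type": "human_confirmation", "parent": "evt-2",
     "confirmed_by": "trader-07"},
    {"id": "evt-4", "type": "record_created", "parent": "evt-3",
     "record_id": "quote-889"},
]

def trace(event_id: str) -> list[dict]:
    """Walk any structured record back to its originating chat message."""
    by_id = {e["id"]: e for e in events}
    chain, current = [], by_id[event_id]
    while current is not None:
        chain.append(current)
        current = by_id.get(current["parent"]) if current["parent"] else None
    return chain  # e.g. trace("evt-4") -> record, confirmation, inference, message
```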
Data remains inside the governed environment throughout the process.
Auto inference mode is on the roadmap. Once the system has been trained on a sufficient body of confirmed classifications, it will be able to infer continuously without manual selection.
The current model requires a human confirmation step before any record is created. That constraint is deliberate. It reflects how trading desks operate under surveillance and record-keeping obligations.
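The gate between the two modes can be pictured as a single check. Everything in this sketch, including the flag, threshold and function, is a hypothetical illustration of the roadmap, not shipped behaviour:

```python
AUTO_MODE = False            # current model: human confirmation is mandatory
CONFIDENCE_THRESHOLD = 0.98  # assumed bar for the future auto inference mode

def should_create_record(inference_confidence: float,
                         human_confirmed: bool) -> bool:
    if AUTO_MODE and inference_confidence >= CONFIDENCE_THRESHOLD:
        return True          # future: high-confidence inferences flow through
    return human_confirmed   # today: no record without an explicit confirmation
```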
The technical foundation already exists. The remaining shift is organisational. Desks must decide whether their chat history remains an archive or becomes a structured data asset.
To see the demo or discuss how this applies to your workflows, contact us.
Related: Turning Forgotten Chat into Tomorrow's Data | Chat Data Mining