Introduction
In the evolving landscape of contract law, the rise of autonomous AI agents in high-frequency trading and automated procurement challenges traditional principles. These systems form contracts without human oversight at the point of acceptance, raising questions about the continued necessity of consensus ad idem, the “meeting of the minds.” This essay analyses whether that subjective requirement remains viable amid algorithmic autonomy. It then examines scenarios in which an AI agent errs in pricing beyond its intended parameters, asking whether such contracts should be voidable for mistake or whether the objective theory compels the human creator to bear the risk. Re-evaluating the principles of Smith v Hughes (1871), particularly in the context of machine learning, this essay contends that the objective approach should prevail, allocating risk to the creator in order to uphold commercial certainty. The discussion draws on key contract law doctrines and recent scholarship on AI.
Traditional Consensus ad Idem and Its Challenges
Consensus ad idem, a cornerstone of English contract law, requires mutual agreement on the same terms for a valid contract (Furmston, 2017). Traditionally, this implies a subjective alignment of intentions, where both parties understand and assent to the deal’s essentials. However, in modern contexts like high-frequency trading, AI agents operate autonomously, making instantaneous decisions based on algorithms without real-time human input. This autonomy disrupts the notion of a “meeting of the minds,” as AI lacks human consciousness or intent.
Indeed, scholars argue that applying subjective consensus to AI is impractical, given that machine learning systems learn from data patterns rather than possessing genuine intent (Scholz, 2017). In automated procurement, for instance, an AI might accept bids that satisfy predefined criteria, but without human intervention there is no shared mental state. This casts doubt on the viability of consensus ad idem; arguably it becomes obsolete in algorithmic settings, where efficiency demands a shift away from subjective inquiries. Yet caution is warranted: traditional law assumes human agency, and discarding that assumption wholesale could undermine protections against unintended agreements.
Objective Theory in Smith v Hughes
The objective theory of contract, articulated in Smith v Hughes (1871) LR 6 QB 597, prioritises the external appearance of agreement over internal intentions. In that case, a buyer purchased oats believing them to be old, while the seller knew they were new but did not correct the misconception. The Court of Queen’s Bench held that the buyer’s unilateral mistake as to quality did not vitiate the contract: as Blackburn J explained, a party whose words and conduct would lead a reasonable person to believe he was assenting to the terms proposed is bound as though he had in fact assented. This principle emphasises commercial reliability: contracts are enforced according to what a reasonable person would infer from conduct and words, not hidden thoughts.
This objective lens provides a framework for modern disputes, minimising disruption from unprovable subjective errors. Its application has limits, however; it does not readily address mutual mistakes that vitiate consent entirely (Furmston, 2017). Re-evaluated in this light, the case’s focus on observable conduct maps readily onto algorithmic behaviour, where AI outputs mimic human offers without any underlying “mind.”
AI Errors, Mistake, and Risk Allocation in Machine Learning
When an AI agent commits a pricing error far outside its intended parameters—such as quoting an absurdly low price because of a machine learning glitch—the question arises: should the contract be voidable for mistake, or does the objective theory demand that the creator bear the risk? Re-evaluating Smith v Hughes in the machine learning context, this essay argues for the latter. Machine learning systems adapt from data and may deviate from the creator’s intent through unforeseen patterns (Werbach, 2018). Yet, just as the buyer in Smith v Hughes was bound to the objective bargain despite his mistaken belief, the creator who deploys an AI generates an objective appearance of offer and acceptance on which counterparties are entitled to rely.
Allowing voidability for unilateral mistakes would erode certainty in automated markets, where counterparties rely on AI actions. In high-frequency trading, for example, millisecond decisions cannot feasibly incorporate subjective scrutiny. Instead, the objective theory suggests the creator assumes responsibility, incentivising robust programming and testing (Scholz, 2017). Critics might counter that extreme errors, such as those exposed by the 2010 Flash Crash, warrant equitable relief for fairness. Nevertheless, the case law confines the doctrine of mistake narrowly: Bell v Lever Brothers Ltd [1932] AC 161 requires an error that renders the subject matter essentially different, and Hartog v Colin & Shields [1939] 3 All ER 566 grants relief for a unilateral pricing error only where the counterparty knew, or must have known, of the mistake. Mere miscalculations do not suffice. Therefore, in conditions of algorithmic autonomy, consensus ad idem is arguably unviable and is better replaced by objective standards that allocate risk predictably in dynamic environments.
Conclusion
In summary, the traditional “meeting of the minds” struggles in the age of AI-driven contracts, where autonomy precludes subjective alignment. Re-evaluating Smith v Hughes (1871) through the lens of machine learning underscores the objective theory’s enduring relevance, demanding that human creators bear the risk of AI errors to preserve commercial stability. This approach, while narrowing remedies for mistake, fosters trust in automated systems. The implications include a need for regulatory updates that balance innovation and equity, ensuring contract law adapts without discarding its core principles. Ultimately, as AI proliferates, objective interpretation offers a pragmatic path forward.
References
- Bell v Lever Brothers Ltd [1932] AC 161.
- Furmston, M. (2017) Cheshire, Fifoot and Furmston’s Law of Contract. 17th edn. Oxford University Press.
- Hartog v Colin & Shields [1939] 3 All ER 566.
- Scholz, L.H. (2017) ‘Algorithmic Contracts’, Stanford Technology Law Review, 20, p. 128.
- Smith v Hughes (1871) LR 6 QB 597.
- Werbach, K. (2018) ‘The Blockchain and the New Architecture of Trust’, University of Pennsylvania Law Review, 166(7), pp. 1639-1705.