
AI-Mocks

by mokksy
AI-Mocks is a Kotlin-based mock server toolkit that brings service virtualization to both HTTP/SSE and LLM APIs — think WireMock meets local OpenAI/Anthropic/Gemini/A2A testing, but with real streaming and Server-Sent Events support.

36 stars · Updated 2026-02-22 · MIT License

Getting Started

1. Clone the repository
$ git clone https://github.com/mokksy/ai-mocks
2. Navigate to the project
$ cd ai-mocks
3. Build the project (the repository uses Gradle)
$ ./gradlew build

Or connect to the hosted endpoint: http://mokksy.dev/

README

Mokksy and AI-Mocks


Mokksy and AI-Mocks are mock HTTP and LLM (Large Language Model) servers inspired by WireMock, with support for response streaming and Server-Sent Events (SSE). They are designed for building and testing LLM clients by mocking LLM responses during development.

Buy me a Coffee


Mokksy

Mokksy is a mock HTTP server built with Kotlin and Ktor. It addresses the limitations of WireMock by supporting true SSE and streaming responses, making it particularly useful for integration testing LLM clients.

Core Features

  • Flexibility to control the server response directly via Ktor's ApplicationCall object.
  • Built with Kotest Assertions.
  • Fluent, modern Kotlin DSL API.
  • Support for simulating streamed responses and Server-Sent Events (SSE), including delays between chunks.
  • Support for simulating response delays.

Example Usages

Responding with Predefined Responses

// given
val expectedResponse =
  // language=json
  """
    {
        "response": "Pong"
    }
    """.trimIndent()

mokksy.get {
  path = beEqual("/ping")
  containsHeader("Foo", "bar")
} respondsWith {
  body = expectedResponse
}

// when
val result = client.get("/ping") {
  headers.append("Foo", "bar")
}

// then
assertThat(result.status).isEqualTo(HttpStatusCode.OK)
assertThat(result.bodyAsText()).isEqualTo(expectedResponse)

POST Request

// given
val id = Random.nextInt()
val expectedResponse =
  // language=json
  """
    {
        "id": "$id",
        "name": "thing-$id"
    }
    """.trimIndent()

mokksy.post {
  path = beEqual("/things")
  bodyContains("\"$id\"")
} respondsWith {
  body = expectedResponse
  httpStatus = HttpStatusCode.Created
  headers {
    // type-safe builder style
    append(HttpHeaders.Location, "/things/$id")
  }
  headers += "Foo" to "bar" // list style
}

// when
val result =
  client.post("/things") {
    headers.append("Content-Type", "application/json")
    setBody(
      // language=json
      """
            {
                "id": "$id"
            }
            """.trimIndent(),
    )
  }

// then
assertThat(result.status).isEqualTo(HttpStatusCode.Created)
assertThat(result.bodyAsText()).isEqualTo(expectedResponse)
assertThat(result.headers["Location"]).isEqualTo("/things/$id")
assertThat(result.headers["Foo"]).isEqualTo("bar")

Server-Sent Events (SSE) Response

Server-Sent Events (SSE) is a technology that lets a server push updates to the client over a single, long-lived HTTP connection, enabling real-time updates without the client continuously polling the server for new data.

mokksy.post {
  path = beEqual("/sse")
} respondsWithSseStream {
  flow =
    flow {
      delay(200.milliseconds)
      emit(
        ServerSentEvent(
          data = "One",
        ),
      )
      delay(50.milliseconds)
      emit(
        ServerSentEvent(
          data = "Two",
        ),
      )
    }
}

// when
val result = client.post("/sse")

// then
assertThat(result.status)
  .isEqualTo(HttpStatusCode.OK)
assertThat(result.contentType())
  .isEqualTo(ContentType.Text.EventStream.withCharset(Charsets.UTF_8))
assertThat(result.bodyAsText())
  .isEqualTo("data: One\r\ndata: Two\r\n")
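For reference, the body asserted above is just the raw SSE wire format: each event arrives as a `data:` line. A minimal client-side sketch (plain Kotlin, independent of Mokksy and Ktor) that extracts the data fields from such a body:

```kotlin
// Illustration only: extract the "data" fields from a raw SSE body
// like the one asserted above. Real clients would use Ktor's SSE support.
fun parseSseData(raw: String): List<String> =
    raw.lineSequence()                          // handles both \n and \r\n breaks
        .filter { it.startsWith("data:") }      // keep only data fields
        .map { it.removePrefix("data:").trim() }
        .toList()

fun main() {
    println(parseSseData("data: One\r\ndata: Two\r\n"))  // [One, Two]
}
```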

AI-Mocks

AI-Mocks is a set of specialized mock server implementations (e.g., a mock of the OpenAI API) built on Mokksy.

It supports mocking the following AI services:

  1. OpenAI - ai-mocks-openai
  2. Anthropic - ai-mocks-anthropic
  3. Google VertexAI Gemini - ai-mocks-gemini
  4. Ollama - ai-mocks-ollama
  5. Agent-to-Agent (A2A) Protocol - ai-mocks-a2a
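To illustrate the kind of payload a mock such as ai-mocks-openai serves, here is a sketch in plain Kotlin with no library dependencies. The field names follow the public OpenAI chat-completion response schema; the id, model, and content values are made up for illustration.

```kotlin
// Sketch: assemble an OpenAI-style chat-completion response body of the kind
// an LLM mock would return. Field names follow the public OpenAI API schema;
// the concrete values passed in are illustrative.
fun chatCompletionJson(id: String, model: String, content: String): String =
    // language=json
    """
    {
        "id": "$id",
        "object": "chat.completion",
        "model": "$model",
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": "$content"},
                "finish_reason": "stop"
            }
        ]
    }
    """.trimIndent()
```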

Feature Support Matrix

Feature          | OpenAI    | Anthropic | Gemini | Ollama   | A2A
-----------------|-----------|-----------|--------|----------|--------------------------------
Chat Completions |           |           |        |          |
Streaming        |           |           |        |          |
Embeddings       |           |           |        |          |
Moderation       |           |           |        |          |
Additional APIs  | Responses | -         | -      | Generate | Full A2A Protocol (11 endpoints)

How to build

To build the project locally:

./gradlew build

or using Make:

make

Contributing

I do welcome contributions! Please see the Contributing Guidelines for details.

Capabilities

Streaming · Push Notifications · Multi-Turn · Auth: none
Tags: agent-to-agent, kotlin, ktor, ktor-server, langchain4j, llm, mock, mock-server, openai-api, service-virtualization
View on GitHub