Crawler Summary
Google Gemini API integration for building AI-powered applications. Use when working with Google's Gemini API, Python SDK (google-genai), TypeScript SDK (@google/genai), multimodal inputs (image, video, audio, PDF), thinking/reasoning features, streaming responses, structured outputs with JSON schemas, multi-turn chat, system instructions, image generation (Nano Banana), video generation (Veo), music generation (Lyria), embeddings, document/PDF processing, or any Gemini API integration task. Triggers on mentions of Gemini, Gemini 3, Gemini 2.5, Google AI, Nano Banana, Veo, Lyria, google-genai, or @google/genai SDK usage. Published capability contract available. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 2/24/2026.
Freshness
Last checked 2/22/2026
Best For
Contract is available with explicit auth and schema references.
Not Ideal For
gemini-api is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.
Evidence Sources Checked
editorial-content, capability-contract, runtime-metrics, public facts pack
Public facts
7
Change events
1
Artifacts
0
Freshness
Feb 22, 2026
Published capability contract available. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 2/24/2026.
Trust score
Unknown
Compatibility
MCP
Freshness
Feb 22, 2026
Vendor
Diskd Ai
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Published capability contract available. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 2/24/2026.
Setup snapshot
git clone https://github.com/diskd-ai/gemini-api.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Diskd Ai
Protocol compatibility
MCP
Auth modes
mcp, api_key
Machine-readable schemas
OpenAPI or schema references published
Adoption signal
4 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
python
from google import genai
client = genai.Client()
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents="How does AI work?"
)
print(response.text)
javascript
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({});
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: "How does AI work?",
});
console.log(response.text);
bash
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{"contents": [{"parts": [{"text": "How does AI work?"}]}]}'
bash
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{"contents": [{"parts": [{"text": "How does AI work?"}]}]}'
python
from google.genai import types
response = client.models.generate_content(
model="gemini-3-flash-preview",
config=types.GenerateContentConfig(
system_instruction="You are a helpful assistant."
),
contents="Hello"
)
javascript
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: "Hello",
config: { systemInstruction: "You are a helpful assistant." },
});
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB_OPENCLEW
Editorial quality
ready
Google Gemini API integration for building AI-powered applications. Use when working with Google's Gemini API, Python SDK (google-genai), TypeScript SDK (@google/genai), multimodal inputs (image, video, audio, PDF), thinking/reasoning features, streaming responses, structured outputs with JSON schemas, multi-turn chat, system instructions, image generation (Nano Banana), video generation (Veo), music generation (Lyria), embeddings, document/PDF processing, or any Gemini API integration task. Triggers on mentions of Gemini, Gemini 3, Gemini 2.5, Google AI, Nano Banana, Veo, Lyria, google-genai, or @google/genai SDK usage.
Generate text from text, images, video, and audio using Google's Gemini API.
| Model | Code | I/O | Context | Thinking |
|-------|------|-----|---------|----------|
| Gemini 3 Pro | gemini-3-pro-preview | Text/Image/Video/Audio/PDF -> Text | 1M/64K | Yes |
| Gemini 3 Flash | gemini-3-flash-preview | Text/Image/Video/Audio/PDF -> Text | 1M/64K | Yes |
| Gemini 2.5 Pro | gemini-2.5-pro | Text/Image/Video/Audio/PDF -> Text | 1M/65K | Yes |
| Gemini 2.5 Flash | gemini-2.5-flash | Text/Image/Video/Audio -> Text | 1M/65K | Yes |
| Nano Banana | gemini-2.5-flash-image | Text/Image -> Image | - | No |
| Nano Banana Pro | gemini-3-pro-image-preview | Text/Image -> Image (up to 4K) | 65K/32K | Yes |
| Veo 3.1 | veo-3.1-generate-preview | Text/Image/Video -> Video+Audio | - | - |
| Veo 3 | veo-3-generate-preview | Text/Image -> Video+Audio | - | - |
| Veo 2 | veo-2.0-generate-001 | Text/Image -> Video (silent) | - | - |
| Lyria RealTime | lyria-realtime-exp | Text -> Music (streaming) | - | - |
| Embeddings | gemini-embedding-001 | Text -> Embeddings | 2K | No |
Free Tier: Flash models only (no free tier for gemini-3-pro-preview in API). Default Temperature: 1.0 (do not change for Gemini 3).
Pricing (per 1M tokens):
from google import genai
client = genai.Client()
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents="How does AI work?"
)
print(response.text)
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({});
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: "How does AI work?",
});
console.log(response.text);
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent" \
-H "x-goog-api-key: $GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{"contents": [{"parts": [{"text": "How does AI work?"}]}]}'
from google.genai import types
response = client.models.generate_content(
model="gemini-3-flash-preview",
config=types.GenerateContentConfig(
system_instruction="You are a helpful assistant."
),
contents="Hello"
)
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: "Hello",
config: { systemInstruction: "You are a helpful assistant." },
});
for chunk in client.models.generate_content_stream(
model="gemini-3-flash-preview",
contents="Tell me a story"
):
print(chunk.text, end="")
const response = await ai.models.generateContentStream({
model: "gemini-3-flash-preview",
contents: "Tell me a story",
});
for await (const chunk of response) {
console.log(chunk.text);
}
chat = client.chats.create(model="gemini-3-flash-preview")
response = chat.send_message("I have 2 dogs.")
print(response.text)
response = chat.send_message("How many paws total?")
print(response.text)
const chat = ai.chats.create({ model: "gemini-3-flash-preview" });
const response = await chat.sendMessage({ message: "I have 2 dogs." });
console.log(response.text);
from PIL import Image
image = Image.open("/path/to/image.png")
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents=[image, "Describe this image"]
)
import { createUserContent, createPartFromUri } from "@google/genai";
const image = await ai.files.upload({ file: "/path/to/image.png" });
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: [
createUserContent([
"Describe this image",
createPartFromUri(image.uri, image.mimeType),
]),
],
});
Process PDFs with native vision understanding (up to 1000 pages).
from google.genai import types
import pathlib
filepath = pathlib.Path('document.pdf')
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents=[
types.Part.from_bytes(data=filepath.read_bytes(), mime_type='application/pdf'),
"Summarize this document"
]
)
import * as fs from 'fs';
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: [
{ text: "Summarize this document" },
{
inlineData: {
mimeType: 'application/pdf',
data: Buffer.from(fs.readFileSync("document.pdf")).toString("base64")
}
}
]
});
For large PDFs, use Files API (stored 48 hours):
uploaded_file = client.files.upload(file=pathlib.Path('large.pdf'))
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents=[uploaded_file, "Summarize this document"]
)
See references/documents.md for Files API, multiple PDFs, and best practices.
Generate and edit images conversationally.
response = client.models.generate_content(
model="gemini-2.5-flash-image",
contents="Create a picture of a sunset over mountains",
)
for part in response.parts:
if part.inline_data is not None:
part.as_image().save("generated.png")
import * as fs from "fs";
const response = await ai.models.generateContent({
model: "gemini-2.5-flash-image",
contents: "Create a picture of a sunset over mountains",
});
for (const part of response.candidates[0].content.parts) {
if (part.inlineData) {
const buffer = Buffer.from(part.inlineData.data, "base64");
fs.writeFileSync("generated.png", buffer);
}
}
Nano Banana Pro (gemini-3-pro-image-preview): 4K output, Google Search grounding, up to 14 reference images, conversational editing with thought signatures.
See references/image-generation.md for editing, multi-turn, and advanced features. See references/gemini-3.md for Gemini 3 image capabilities.
Generate 8-second 720p, 1080p, or 4K videos with native audio using Veo.
import time
from google import genai
client = genai.Client()
operation = client.models.generate_videos(
model="veo-3.1-generate-preview",
prompt="A cinematic shot of a majestic lion in the savannah at golden hour",
)
# Poll until complete (video generation is async)
while not operation.done:
time.sleep(10)
operation = client.operations.get(operation)
# Download the video
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("lion.mp4")
let operation = await ai.models.generateVideos({
model: "veo-3.1-generate-preview",
prompt: "A cinematic shot of a majestic lion in the savannah at golden hour",
});
while (!operation.done) {
await new Promise(resolve => setTimeout(resolve, 10000));
operation = await ai.operations.getVideosOperation({ operation });
}
ai.files.download({
file: operation.response.generatedVideos[0].video,
downloadPath: "lion.mp4",
});
Veo 3.1 features: Portrait (9:16), video extension (up to 148s), 4K resolution, native audio with dialogue/SFX.
See references/veo.md for image-to-video, reference images, video extension, and prompting guide.
Generate continuous instrumental music in real-time with dynamic steering.
import asyncio
from google import genai
from google.genai import types
client = genai.Client()
async def main():
async with client.aio.live.music.connect(model='models/lyria-realtime-exp') as session:
# Set prompts and config
await session.set_weighted_prompts(
prompts=[types.WeightedPrompt(text='minimal techno', weight=1.0)]
)
await session.set_music_generation_config(
config=types.LiveMusicGenerationConfig(bpm=90, temperature=1.0)
)
# Start streaming
await session.play()
# Receive audio chunks
async for message in session.receive():
if message.server_content and message.server_content.audio_chunks:
audio_data = message.server_content.audio_chunks[0].data
# Process audio...
asyncio.run(main())
const session = await ai.live.music.connect({
model: "models/lyria-realtime-exp",
callbacks: {
onmessage: (message) => {
if (message.serverContent?.audioChunks) {
for (const chunk of message.serverContent.audioChunks) {
const audioBuffer = Buffer.from(chunk.data, "base64");
// Process audio...
}
}
},
},
});
await session.setWeightedPrompts({
weightedPrompts: [{ text: "minimal techno", weight: 1.0 }],
});
await session.setMusicGenerationConfig({
musicGenerationConfig: { bpm: 90, temperature: 1.0 },
});
await session.play();
Output: 48kHz stereo 16-bit PCM. Instrumental only. Configurable BPM, scale, density, brightness.
See references/lyria.md for steering music, configuration, and prompting guide.
Generate text embeddings for semantic similarity, search, and classification.
result = client.models.embed_content(
model="gemini-embedding-001",
contents="What is the meaning of life?"
)
print(result.embeddings)
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
contents: 'What is the meaning of life?',
});
console.log(response.embeddings);
Task types: SEMANTIC_SIMILARITY, CLASSIFICATION, CLUSTERING, RETRIEVAL_DOCUMENT, RETRIEVAL_QUERY
Output dimensions: 768, 1536, 3072 (default)
See references/embeddings.md for batch processing, task types, and normalization.
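The normalization mentioned above matters when using reduced output dimensions: truncated embedding vectors are generally not unit-length, so an L2 normalization step is commonly applied before cosine-similarity comparisons. A pure-Python sketch (no API call; helper names are illustrative):

```python
import math

def l2_normalize(vec: list[float]) -> list[float]:
    """Scale an embedding vector to unit length."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm == 0.0:
        return vec  # degenerate vector: leave untouched
    return [x / norm for x in vec]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity; reduces to a dot product on unit vectors."""
    na, nb = l2_normalize(a), l2_normalize(b)
    return sum(x * y for x, y in zip(na, nb))
```

Apply this to each `result.embeddings` vector before ranking search candidates.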
Control reasoning depth with thinking_level: minimal (Flash only), low, medium (Flash only), high (default).
from google.genai import types
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents="Solve this math problem...",
config=types.GenerateContentConfig(
thinking_config=types.ThinkingConfig(thinking_level="high")
),
)
import { ThinkingLevel } from "@google/genai";
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: "Solve this math problem...",
config: { thinkingConfig: { thinkingLevel: ThinkingLevel.HIGH } },
});
Note: Cannot mix thinking_level with legacy thinking_budget (returns 400 error).
For Gemini 2.5, use thinking_budget (0-32768) instead. See references/thinking.md.
For complete Gemini 3 features (thought signatures, media resolution, etc.), see references/gemini-3.md.
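Since thinking_level (Gemini 3) and the legacy thinking_budget (Gemini 2.5) cannot be mixed without a 400 error, a small helper that picks the right field by model family avoids the trap. This is an illustrative sketch, not part of the SDK:

```python
def thinking_kwargs_for(model: str, level: str = "high",
                        budget: int = 1024) -> dict:
    """Return keyword args for types.ThinkingConfig by model family.

    Gemini 3 models take thinking_level; Gemini 2.5 models take
    thinking_budget (0-32768). Mixing the two returns HTTP 400.
    """
    if model.startswith("gemini-3"):
        return {"thinking_level": level}
    if model.startswith("gemini-2.5"):
        return {"thinking_budget": budget}
    raise ValueError(f"unknown model family: {model}")
```

Use it as `thinking_config=types.ThinkingConfig(**thinking_kwargs_for(model))` so the same call site works for both families.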
Generate JSON responses adhering to a schema.
from pydantic import BaseModel
from typing import List
class Recipe(BaseModel):
name: str
ingredients: List[str]
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents="Extract: chocolate chip cookies need flour, sugar, chips",
config={
"response_mime_type": "application/json",
"response_json_schema": Recipe.model_json_schema(),
},
)
recipe = Recipe.model_validate_json(response.text)
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
const recipeSchema = z.object({
name: z.string(),
ingredients: z.array(z.string()),
});
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: "Extract: chocolate chip cookies need flour, sugar, chips",
config: {
responseMimeType: "application/json",
responseJsonSchema: zodToJsonSchema(recipeSchema),
},
});
See references/structured-outputs.md for advanced patterns.
Available: Google Search, File Search, Code Execution, URL Context, Function Calling
Not supported: Google Maps grounding, Computer Use (use Gemini 2.5 for these)
response = client.models.generate_content(
model="gemini-3-pro-preview",
contents="What's the latest news on AI?",
config={"tools": [{"google_search": {}}]},
)
const response = await ai.models.generateContent({
model: "gemini-3-pro-preview",
contents: "What's the latest news on AI?",
config: { tools: [{ googleSearch: {} }] },
});
Structured outputs + tools: Gemini 3 supports combining JSON schemas with built-in tools (Google Search, URL Context, Code Execution). See references/gemini-3.md.
See references/tools.md for all tool patterns.
Connect models to external tools and APIs. The model determines when to call functions and provides parameters.
from google.genai import types
# Define function
get_weather = {
"name": "get_weather",
"description": "Get weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City name"},
},
"required": ["location"],
},
}
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents="What's the weather in Tokyo?",
config=types.GenerateContentConfig(
tools=[types.Tool(function_declarations=[get_weather])]
),
)
# Check for function call
if response.function_calls:
fc = response.function_calls[0]
print(f"Call {fc.name} with {fc.args}")
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: "What's the weather in Tokyo?",
config: {
tools: [{ functionDeclarations: [getWeather] }],
},
});
if (response.functionCalls) {
const { name, args } = response.functionCalls[0];
// Execute function and send result back
}
Automatic function calling (Python): Pass functions directly as tools for automatic execution.
See references/function-calling.md for execution modes, compositional calling, multimodal responses, MCP integration, and best practices.
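The manual loop above (inspect response.function_calls, execute the function, send the result back) can be sketched as a plain local dispatcher. The registry, decorator, and weather stub here are illustrative helpers, not part of the SDK:

```python
from typing import Any, Callable

# Illustrative local registry mapping declared tool names to implementations.
TOOL_REGISTRY: dict[str, Callable[..., Any]] = {}

def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register a function under its own name."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@tool
def get_weather(location: str) -> dict:
    # Stub: a real implementation would call a weather API.
    return {"location": location, "forecast": "sunny"}

def dispatch(name: str, args: dict) -> Any:
    """Execute the function the model asked for (fc.name / fc.args)."""
    if name not in TOOL_REGISTRY:
        raise KeyError(f"model requested unknown tool: {name}")
    return TOOL_REGISTRY[name](**args)
```

In the SDK loop this becomes `dispatch(fc.name, fc.args)`, with the return value sent back as the function response part.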
| Feature | Python | JavaScript |
|---------|--------|------------|
| Generate | generate_content() | generateContent() |
| Stream | generate_content_stream() | generateContentStream() |
| Chat | chats.create() | chats.create() |
| Structured | response_json_schema= | responseJsonSchema: |
| Image Gen | gemini-2.5-flash-image | gemini-2.5-flash-image |
| Video Gen | generate_videos() | generateVideos() |
| Music Gen | live.music.connect() | live.music.connect() |
| Function Call | function_declarations | functionDeclarations |
| Embeddings | embed_content() | embedContent() |
| Files API | files.upload() | files.upload() |
For advanced Gemini 3 features, see references/gemini-3.md:
thinking_level (minimal, low, medium, high)
media_resolution (media_resolution_low to ultra_high)
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
ready
Auth
mcp, api_key
Streaming
Yes
Data region
global
Protocol support
Requires: mcp, lang:typescript, streaming
Forbidden: none
Guardrails
Operational confidence: medium
curl -s "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/snapshot"
curl -s "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/contract"
curl -s "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/trust"
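The directory's published retry policy (3 attempts, 500/1500/3500 ms backoff, retry on HTTP 429/503) might be applied to these endpoints as sketched below; the fetch callable is injected so the sketch stays transport-agnostic, and timeouts would be handled similarly.

```python
import time
from typing import Callable, Sequence, Tuple

RETRYABLE = {429, 503}  # statuses the published policy marks retryable

def fetch_with_retry(fetch: Callable[[], Tuple[int, str]],
                     max_attempts: int = 3,
                     backoff_ms: Sequence[int] = (500, 1500, 3500)) -> str:
    """Call fetch() until a non-retryable status or attempts are exhausted.

    fetch returns (status_code, body); 429/503 trigger a backoff and retry.
    """
    status = None
    for attempt in range(max_attempts):
        status, body = fetch()
        if status not in RETRYABLE:
            return body
        if attempt < max_attempts - 1:
            time.sleep(backoff_ms[attempt] / 1000)
    raise RuntimeError(f"gave up after {max_attempts} attempts (last status {status})")
```

Wrap each snapshot/contract/trust request in this helper rather than hand-rolling sleep loops at every call site.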
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "ready",
"authModes": [
"mcp",
"api_key"
],
"requires": [
"mcp",
"lang:typescript",
"streaming"
],
"forbidden": [],
"supportsMcp": true,
"supportsA2a": false,
"supportsStreaming": true,
"inputSchemaRef": "https://github.com/diskd-ai/gemini-api#input",
"outputSchemaRef": "https://github.com/diskd-ai/gemini-api#output",
"dataRegion": "global",
"contractUpdatedAt": "2026-02-24T19:44:26.372Z",
"sourceUpdatedAt": "2026-02-24T19:44:26.372Z",
"freshnessSeconds": 4423263
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T00:25:29.823Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "supported",
"confidenceSource": "contract",
"notes": "Confirmed by capability contract"
},
{
"key": "combining",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:MCP|supported|contract capability:combining|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP",
"href": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:26.372Z",
"isPublic": true
},
{
"factKey": "auth_modes",
"category": "compatibility",
"label": "Auth modes",
"value": "mcp, api_key",
"href": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:26.372Z",
"isPublic": true
},
{
"factKey": "schema_refs",
"category": "artifact",
"label": "Machine-readable schemas",
"value": "OpenAPI or schema references published",
"href": "https://github.com/diskd-ai/gemini-api#input",
"sourceUrl": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:26.372Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Diskd Ai",
"href": "https://github.com/diskd-ai/gemini-api",
"sourceUrl": "https://github.com/diskd-ai/gemini-api",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:43:14.176Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "4 GitHub stars",
"href": "https://github.com/diskd-ai/gemini-api",
"sourceUrl": "https://github.com/diskd-ai/gemini-api",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:43:14.176Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/diskd-ai-gemini-api/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]