LLM Frameworks Integration

Integrate the LookC MCP server with popular LLM frameworks to enhance your AI agents with comprehensive company research capabilities.


Overview

The LookC MCP server seamlessly integrates with leading LLM frameworks through the standardized Model Context Protocol. This enables your AI agents to access real-time company data while maintaining framework-specific workflows and patterns.
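The framework examples below instantiate a `LookCMCPClient`. A minimal sketch of such a client is shown here for reference; the class name and method signatures are illustrative, while the endpoint URL, Bearer-token auth, and JSON-RPC `tools/call` envelope mirror the configuration and payloads used elsewhere on this page.

```python
import requests


class LookCMCPClient:
    """Minimal JSON-RPC client for the LookC MCP server (illustrative sketch)."""

    def __init__(self, api_key: str, endpoint: str = "https://api.lookc.io/mcp"):
        self.endpoint = endpoint
        self.session = requests.Session()
        self.session.headers.update({
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        })

    @staticmethod
    def build_payload(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
        # JSON-RPC 2.0 envelope for an MCP tools/call request
        return {
            "jsonrpc": "2.0",
            "id": request_id,
            "method": "tools/call",
            "params": {"name": tool_name, "arguments": arguments},
        }

    def call_tool(self, tool_name: str, arguments: dict) -> dict:
        response = self.session.post(
            self.endpoint,
            json=self.build_payload(tool_name, arguments),
            timeout=30,
        )
        response.raise_for_status()
        return response.json()
```

Any client that produces the same JSON-RPC payload will work; the framework integrations below only depend on a `call_tool(name, arguments)` method.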

CrewAI

Multi-agent orchestration with company research capabilities for complex workflows.

OpenAI Assistants

Function calling integration for GPT-4 and other OpenAI models with LookC data.

Custom Frameworks

Standard HTTP interface works with any framework supporting external tool calls.

CrewAI Integration

CrewAI's multi-agent architecture pairs perfectly with LookC's company research tools, enabling sophisticated prospect research and competitive analysis workflows.

Installation and Setup

CrewAI with LookC MCP

pip install crewai
pip install requests  # For HTTP MCP client

CrewAI Agent Configuration

CrewAI agent with LookC tools

import json

from crewai import Agent, Task, Crew
from crewai_tools import BaseTool

class LookCEmployeeTool(BaseTool):
    name: str = "lookc_employee_search"
    description: str = "Search for employees at a company using LinkedIn URL or organization ID"
    
    def __init__(self, mcp_client: LookCMCPClient):
        super().__init__()
        self.mcp_client = mcp_client
    
    def _run(self, linkedin_url: str = None, org_id: str = None, 
             title_regex: str = None, seniority_levels: list = None) -> str:
        
        if linkedin_url:
            result = self.mcp_client.call_tool(
                "get_employee_list_by_linkedin",
                {
                    "linkedin_url": linkedin_url,
                    "title_regex": title_regex,
                    "seniority_levels": seniority_levels or [],
                    "limit": 50
                }
            )
        elif org_id:
            result = self.mcp_client.call_tool(
                "get_employee_list",
                {
                    "org_id": org_id,
                    "title_regex": title_regex,
                    "seniority_levels": seniority_levels or [],
                    "limit": 50
                }
            )
        else:
            return "Error: Must provide either linkedin_url or org_id"
        
        return json.dumps(result, indent=2)

# Initialize MCP client
mcp_client = LookCMCPClient(api_key="your_lookc_api_key")
lookc_tool = LookCEmployeeTool(mcp_client)

# Create research agent
research_agent = Agent(
    role='Company Research Specialist',
    goal='Gather comprehensive employee and organizational intelligence',
    backstory="""You are an expert at researching companies and their employees. 
    You use LookC's database to find detailed information about organizations, 
    their structure, and key personnel.""",
    tools=[lookc_tool],
    verbose=True
)

Multi-Agent Workflow Example

Prospect research workflow

# Define research task
research_task = Task(
    description="""Research the engineering team at OpenAI:
    1. Find the LinkedIn company page for OpenAI
    2. Get a list of engineers and technical leaders
    3. Focus on senior engineers, managers, and directors
    4. Provide a summary of the team structure and key personnel
    
    Company: OpenAI
    Focus: Engineering and technical roles
    Seniority: MANAGER, VP_DIRECTOR, PARTNER_CXO""",
    agent=research_agent,
    expected_output="Detailed report on OpenAI's engineering team structure and key personnel"
)

# Create and run crew
crew = Crew(
    agents=[research_agent],
    tasks=[research_task],
    verbose=True  # Crew.verbose is a boolean in current CrewAI releases
)

result = crew.kickoff()
print(result)

OpenAI Assistants API

Integrate LookC company research capabilities directly into OpenAI's Assistants API using function calling.

Function Definitions

OpenAI function definitions

import json
import time

from openai import OpenAI

client = OpenAI(api_key="your_openai_api_key")

# Define LookC tools as OpenAI functions
lookc_functions = [
    {
        "type": "function",
        "function": {
            "name": "get_company_employees",
            "description": "Get employee list for a company using LinkedIn URL",
            "parameters": {
                "type": "object",
                "properties": {
                    "linkedin_url": {
                        "type": "string",
                        "description": "LinkedIn company page URL"
                    },
                    "title_filter": {
                        "type": "string",
                        "description": "Regex pattern to filter by job title"
                    },
                    "seniority_levels": {
                        "type": "array",
                        "items": {
                            "type": "string",
                            "enum": ["PARTNER_CXO", "VP_DIRECTOR", "MANAGER", "LOWER"]
                        },
                        "description": "Filter by seniority levels"
                    }
                },
                "required": ["linkedin_url"]
            }
        }
    },
    {
        "type": "function", 
        "function": {
            "name": "get_organization_ids",
            "description": "Convert LinkedIn URL to LookC organization identifier",
            "parameters": {
                "type": "object",
                "properties": {
                    "linkedin_url": {
                        "type": "string",
                        "description": "LinkedIn company page URL"
                    }
                },
                "required": ["linkedin_url"]
            }
        }
    }
]

Assistant Creation and Function Handling

OpenAI Assistant with LookC

# Create assistant with LookC functions
assistant = client.beta.assistants.create(
    name="Company Research Assistant",
    instructions="""You are a company research specialist with access to LookC's 
    comprehensive database. Help users research companies, find employees, and 
    analyze organizational structures. Always provide detailed, accurate information 
    and cite your sources.""",
    tools=lookc_functions,
    model="gpt-4-1106-preview"
)

def handle_function_call(function_name: str, arguments: dict) -> str:
    """Handle function calls by routing to LookC MCP server"""
    mcp_client = LookCMCPClient(api_key="your_lookc_api_key")
    
    if function_name == "get_company_employees":
        result = mcp_client.call_tool(
            "get_employee_list_by_linkedin",
            {
                "linkedin_url": arguments["linkedin_url"],
                "title_regex": arguments.get("title_filter"),
                "seniority_levels": arguments.get("seniority_levels", []),
                "limit": 100
            }
        )
    elif function_name == "get_organization_ids":
        result = mcp_client.call_tool(
            "get_organization_ids",
            arguments
        )
    else:
        return f"Unknown function: {function_name}"
    
    return json.dumps(result)

Complete Conversation Flow

Assistant conversation

# Create thread and message
thread = client.beta.threads.create()

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Research the engineering team at Stripe. I want to understand their technical leadership structure."
)

# Run assistant
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id
)

# Handle function calls
while run.status in ['queued', 'in_progress', 'requires_action']:
    if run.status == 'requires_action':
        tool_calls = run.required_action.submit_tool_outputs.tool_calls
        tool_outputs = []
        
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)
            
            output = handle_function_call(function_name, arguments)
            
            tool_outputs.append({
                "tool_call_id": tool_call.id,
                "output": output
            })
        
        # Submit tool outputs
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id,
            run_id=run.id,
            tool_outputs=tool_outputs
        )
    
    # Wait and check status
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Get final response
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)

Configuration Examples

Environment Configuration

Environment setup

# LookC MCP Server
LOOKC_API_KEY=your_lookc_api_key_here
LOOKC_MCP_ENDPOINT=https://api.lookc.io/mcp

# Framework-specific keys
OPENAI_API_KEY=your_openai_api_key
CREWAI_API_KEY=your_crewai_api_key

# Optional: Rate limiting
LOOKC_RATE_LIMIT=100
LOOKC_TIMEOUT=30
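These variables can be collected into the `LookCConfig` object consumed by the production client below. A minimal sketch, with field names assumed to mirror the environment variables above:

```python
import os
from dataclasses import dataclass


@dataclass
class LookCConfig:
    """Configuration loaded from the LookC environment variables (illustrative sketch)."""
    api_key: str
    endpoint: str
    rate_limit: int
    timeout: int

    @classmethod
    def from_env(cls) -> "LookCConfig":
        # LOOKC_API_KEY is required; the rest fall back to sensible defaults
        return cls(
            api_key=os.environ["LOOKC_API_KEY"],
            endpoint=os.environ.get("LOOKC_MCP_ENDPOINT", "https://api.lookc.io/mcp"),
            rate_limit=int(os.environ.get("LOOKC_RATE_LIMIT", "100")),
            timeout=int(os.environ.get("LOOKC_TIMEOUT", "30")),
        )
```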

Error Handling and Retry Logic

Production-ready client

import logging
import time
from typing import Any, Dict, Optional

import requests

class ProductionLookCClient:
    def __init__(self, config: LookCConfig):
        self.config = config
        self.session = requests.Session()
        self.session.headers.update({
            "Content-Type": "application/json",
            "Authorization": f"Bearer {config.api_key}"
        })
        
    def call_tool_with_retry(self, tool_name: str, arguments: Dict[str, Any], 
                           max_retries: int = 3) -> Optional[Dict[str, Any]]:
        """Call LookC tool with exponential backoff retry logic"""
        
        for attempt in range(max_retries):
            try:
                payload = {
                    "jsonrpc": "2.0",
                    "id": int(time.time()),
                    "method": "tools/call",
                    "params": {
                        "name": tool_name,
                        "arguments": arguments
                    }
                }
                
                response = self.session.post(
                    self.config.endpoint,
                    json=payload,
                    timeout=self.config.timeout
                )
                
                if response.status_code == 429:  # Rate limited
                    wait_time = 2 ** attempt
                    logging.warning(f"Rate limited, waiting {wait_time}s")
                    time.sleep(wait_time)
                    continue
                    
                response.raise_for_status()
                result = response.json()
                
                if "error" in result:
                    logging.error(f"LookC API error: {result['error']}")
                    return None
                    
                return result
                
            except requests.exceptions.RequestException as e:
                logging.error(f"Request failed (attempt {attempt + 1}): {e}")
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)
        
        return None

Best Practices

Rate Limiting

Implement exponential backoff and respect rate limit headers to avoid API throttling.
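Exponential backoff is a reasonable default, but when the server includes a `Retry-After` header it is better to honor it. A small helper along these lines (a hypothetical function, not part of the LookC SDK) can be dropped into a retry loop like the one above:

```python
from typing import Optional


def backoff_delay(attempt: int, retry_after: Optional[str] = None, cap: float = 60.0) -> float:
    """Seconds to wait before retry `attempt` (0-based): prefer the server's
    Retry-After header when present, otherwise use capped exponential backoff."""
    if retry_after is not None:
        try:
            return min(float(retry_after), cap)
        except ValueError:
            pass  # non-numeric Retry-After (e.g. an HTTP date); fall back
    return min(2.0 ** attempt, cap)
```

The cap keeps a long run of 429 responses from stalling an agent for minutes at a time.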

Error Handling

Always handle API errors gracefully and provide meaningful feedback to users.
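For example, rather than passing a raw JSON-RPC error object back to the model or user, translate it into a readable message. A helper like the following (hypothetical name; the `code`/`message` fields are standard JSON-RPC error fields) illustrates the idea:

```python
def format_mcp_error(response: dict) -> str:
    """Turn a JSON-RPC error object into a user-facing message ("" on success)."""
    error = response.get("error")
    if error is None:
        return ""
    code = error.get("code", "unknown")
    message = error.get("message", "no details provided")
    return f"LookC request failed (code {code}): {message}"
```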

For more integration options, see our guides for Claude Desktop, Development Tools, and Custom Agents.
