
MCP Tools Reference

The BrowseHand MCP server provides the following tools. When you make natural-language requests to Claude Desktop, the appropriate tools are invoked automatically.

| Tool | Description |
| --- | --- |
| ping_extension | Check extension connection status |
| read_browser_content | Read text content from current tab |
| execute_script | Execute JavaScript code |
| extract_structured_data | Extract structured data from repeating elements |
| click_element | Click element by CSS selector |
| scroll_page | Scroll page or specific element |
| wait_for_element | Wait for element to appear |
| navigate_to | Navigate to a URL |
| get_current_url | Get current URL |
| get_dom_snapshot | Get DOM structure snapshot |
| save_to_csv | Save data to CSV file |
| save_to_json | Save data to JSON file |

ping_extension

Check the connection status with the Chrome extension.

Parameters: None

Example:

Check the extension connection status

read_browser_content

Read content from the currently active browser tab.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| selector | string | No | DOM selector to extract (default: body) |

Examples:

Read the current page content
Read only the .main-content area using read_browser_content

get_dom_snapshot

Get a snapshot of the DOM structure (main tags and text) for AI analysis. Unnecessary tags (script, style, noscript, etc.) are removed automatically.

Parameters: None

Example:

Analyze this page's DOM structure
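The pruning step can be pictured with a small sketch. This is an assumption about how such a tool typically works, not the server's actual code; nodes here are plain objects rather than live DOM nodes:

```javascript
// Hypothetical sketch of snapshot pruning: keep tag and text, drop noise
// tags. The real tool walks live DOM nodes; plain objects stand in here.
const NOISE_TAGS = new Set(["script", "style", "noscript"]);

function pruneSnapshot(node) {
  if (NOISE_TAGS.has(node.tag)) return null; // drop unnecessary tags
  const children = (node.children || [])
    .map(pruneSnapshot)
    .filter((child) => child !== null);
  return { tag: node.tag, text: node.text || "", children };
}
```

A `<script>` child is dropped while visible content survives, which is why the snapshot stays compact enough for AI analysis.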

get_current_url

Get the current browser tab's URL.

Parameters: None


execute_script

Execute JavaScript code in the browser.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| code | string | Yes | JavaScript code to execute |

Examples:

Use execute_script to run document.title
Use execute_script to count all links on this page
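Requests like these translate into small JavaScript payloads. The strings below are plausible `code` values (illustrative assumptions; the exact snippets Claude generates will vary, and the tool presumably returns the expression's value):

```javascript
// Plausible `code` payloads for execute_script. These are assumptions
// for illustration, not the exact strings Claude generates.
const codeSamples = {
  pageTitle: "document.title",
  linkCount: "document.querySelectorAll('a').length",
  allLinkHrefs: "Array.from(document.querySelectorAll('a'), a => a.href)",
};
```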

extract_structured_data

Extract structured data from repeating elements.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| containerSelector | string | Yes | Selector for each repeating item container (e.g., .business-item) |
| fields | object | Yes | Field definitions: each key is a field name, each value a relative selector within the container |
| limit | number | No | Maximum items to extract (default: all) |

Example:

Use extract_structured_data on .product-card with fields:
{ title: '.product-name', price: '.product-price', link: 'a' }

Result Example:

[
{ "title": "Product A", "price": "$10.00", "link": "/product/1" },
{ "title": "Product B", "price": "$20.00", "link": "/product/2" }
]
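Conceptually, the tool loops over container matches and resolves each field selector relative to its container. A hedged sketch of that loop (taking `href` for anchors is an assumption based on the result example above; the real implementation may differ):

```javascript
// Hedged sketch of extract_structured_data's core loop. Elements are
// assumed to expose querySelector/querySelectorAll like real DOM nodes;
// reading `href` for <a> elements is an assumption, not confirmed behavior.
function extractStructured(root, containerSelector, fields, limit) {
  const containers = Array.from(root.querySelectorAll(containerSelector));
  const items = limit ? containers.slice(0, limit) : containers;
  return items.map((container) => {
    const row = {};
    for (const [name, selector] of Object.entries(fields)) {
      const el = container.querySelector(selector);
      if (!el) { row[name] = null; continue; }
      row[name] = el.tagName === "A"
        ? el.getAttribute("href")
        : el.textContent.trim();
    }
    return row;
  });
}
```

Run against two `.product-card` containers, this produces exactly the shape shown in the result example: one object per container, one key per field.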

click_element

Click an element by CSS selector.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| selector | string | Yes | CSS selector of the element to click |
| waitAfter | number | No | Milliseconds to wait after the click (default: 1000) |

Examples:

Click the .next-button
Use click_element to click #submit-btn and wait 2 seconds

scroll_page

Scroll the browser page or a specific element.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| selector | string | No | CSS selector of the element to scroll (scrolls the entire page if omitted) |
| direction | string | Yes | Scroll direction: down, up, bottom, top |
| amount | number | No | Pixels to scroll (only used with down/up) |

Examples:

Scroll to the bottom of the page
Use scroll_page to scroll div[role="feed"] down by 500 pixels
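The direction/amount semantics can be sketched as follows. This is an assumed model: the real tool likely calls scrollBy/scrollTo and handles the whole-page case separately; here only scrollTop is adjusted:

```javascript
// Assumed model of scroll_page on a scrollable element: move scrollTop
// according to direction and amount, clamped to the element's bounds.
function scrollElement(el, direction, amount = 300) {
  switch (direction) {
    case "down":   el.scrollTop = Math.min(el.scrollTop + amount, el.scrollHeight); break;
    case "up":     el.scrollTop = Math.max(el.scrollTop - amount, 0); break;
    case "bottom": el.scrollTop = el.scrollHeight; break;
    case "top":    el.scrollTop = 0; break;
    default: throw new Error(`unknown direction: ${direction}`);
  }
  return el.scrollTop;
}
```

Note that `amount` only matters for down/up; bottom and top jump straight to the edge, matching the parameter table above.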

wait_for_element

Wait until a specific element appears.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| selector | string | Yes | CSS selector of the element to wait for |
| timeout | number | No | Maximum wait time in milliseconds (default: 10000) |

Examples:

Wait until .loading disappears
Use wait_for_element to wait for .results to appear with 5 second timeout
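Under the hood this is presumably a poll-until-found loop. A hedged sketch, where `find` stands in for `() => document.querySelector(selector)`:

```javascript
// Hedged sketch of the polling loop wait_for_element presumably runs.
// `find` stands in for () => document.querySelector(selector); the
// interval value is an assumption, not a documented parameter.
function waitForElement(find, { timeout = 10000, interval = 100 } = {}) {
  return new Promise((resolve, reject) => {
    const deadline = Date.now() + timeout;
    (function poll() {
      const el = find();
      if (el) return resolve(el);
      if (Date.now() >= deadline) {
        return reject(new Error(`timed out after ${timeout}ms`));
      }
      setTimeout(poll, interval);
    })();
  });
}
```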

navigate_to

Navigate the browser to a specific URL.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| url | string | Yes | URL to navigate to |

Examples:

Go to Google Maps
Use navigate_to to go to https://www.google.com/maps

save_to_csv

Save data to a CSV file.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| filename | string | Yes | Filename to save (e.g., leads.csv) |
| data | array | Yes | Array of data to save; each item must be an object |
| append | boolean | No | If true, append to an existing file; if false, overwrite (default: false) |

Save Location: If no path is specified, files are saved to the Desktop.

Examples:

Save the extracted data to cafes.csv
Use save_to_csv to append data to results.csv (append: true)
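Each object presumably becomes one row, with the first object's keys as the header. A sketch of that serialization (an assumption; quoting here follows RFC 4180 style, and the server's actual rules may differ):

```javascript
// Hedged sketch of how an array of objects becomes CSV text. Fields
// containing quotes, commas, or newlines are quoted RFC 4180-style;
// the server's actual quoting rules may differ.
function toCsv(rows) {
  if (rows.length === 0) return "";
  const headers = Object.keys(rows[0]);
  const escape = (value) => {
    const s = String(value ?? "");
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const lines = [headers.map(escape).join(",")];
  for (const row of rows) {
    lines.push(headers.map((h) => escape(row[h])).join(","));
  }
  return lines.join("\n");
}
```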

save_to_json

Save data to a JSON file.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| filename | string | Yes | Filename to save (e.g., data.json) |
| data | object/array | Yes | Data to save |

Example:

Save the extracted data to results.json

Combined Workflow Example

1. navigate_to("https://www.google.com/maps/search/gangnam+cafe")
2. wait_for_element("[data-index]", timeout=10000)
3. Loop:
   a. extract_structured_data(containerSelector="[data-index]", fields={...})
   b. scroll_page(selector="div[role='feed']", direction="down", amount=500)
   c. wait_for_element("[data-index]:last-child")
4. save_to_csv("gangnam_cafes.csv", collected_data)

As a natural language request to Claude:

Scrape "Gangnam cafe" search results from Google Maps.
Scroll to collect 30 business listings and save to CSV.
Make sure to scroll only the sidebar (selector: div[role="feed"]).
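The collect-and-scroll loop in the workflow above needs some bookkeeping, since each scroll re-extracts rows already seen. A hedged sketch of that merge logic (the actual orchestration is done by Claude's successive tool calls, not by the server):

```javascript
// Hedged sketch of the scroll-and-collect bookkeeping: merge each batch
// of extracted rows, dedupe by a key field, stop at the target count.
// In practice Claude performs this reasoning between tool calls.
function mergeBatch(collected, batch, key, target) {
  const seen = new Set(collected.map((row) => row[key]));
  for (const row of batch) {
    if (collected.length >= target) break;
    if (seen.has(row[key])) continue; // already collected on a prior scroll
    seen.add(row[key]);
    collected.push(row);
  }
  return { collected, done: collected.length >= target };
}
```

Once `done` is true, the accumulated rows are handed to save_to_csv.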