# Getting Started

Create your account and make your first API request.

This guide will help you set up Anyhunt and scrape your first webpage in just a few minutes.
## Step 1: Create an Account

1. Go to [console.anyhunt.app](https://console.anyhunt.app)
2. Sign up with your email or GitHub account
3. Verify your email address
## Step 2: Get Your API Key

1. Navigate to **API Keys** in the sidebar
2. Click **Create API Key**
3. Give your key a name (e.g., "Development")
4. Copy your API key - you'll only see it once!

> **Note:** Store your API key securely. Never expose it in client-side code or public repositories.
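A common way to keep the key out of source code is an environment variable. The variable name `ANYHUNT_API_KEY` below is just a convention (it matches the Node.js example later in this guide); any name works:

```shell
# Store the key in an environment variable instead of hard-coding it.
export ANYHUNT_API_KEY="ah_your_api_key"

# Then reference it in requests, so the key never appears in your
# scripts or version control, e.g.:
#
#   curl -X POST https://server.anyhunt.app/api/v1/scrape \
#     -H "Authorization: Bearer ${ANYHUNT_API_KEY}" \
#     -H "Content-Type: application/json" \
#     -d '{"url": "https://example.com", "formats": ["markdown"]}'

echo "Authorization: Bearer ${ANYHUNT_API_KEY}"
```

For anything beyond local experiments, prefer a secrets manager or a `.env` file that is excluded from version control.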
## Step 3: Make Your First Request

Use curl or your favorite HTTP client to scrape a webpage:

```bash
curl -X POST https://server.anyhunt.app/api/v1/scrape \
  -H "Authorization: Bearer ah_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "formats": ["markdown", "screenshot"],
    "onlyMainContent": true
  }'
```

**Response**
```json
{
  "id": "scrape_abc123",
  "url": "https://example.com",
  "markdown": "# Example Domain\n\nThis domain is for use in illustrative examples...",
  "screenshot": {
    "url": "https://cdn.anyhunt.app/scraper/scrape_abc123.png",
    "width": 1280,
    "height": 800,
    "format": "png"
  },
  "metadata": {
    "title": "Example Domain",
    "description": "Example Domain for illustrative examples"
  }
}
```

## Step 4: Use the Results
The response includes the extracted content in your requested formats:

- `markdown` - clean, readable text extracted from the page
- `screenshot` - a CDN URL where your screenshot is hosted
- `html` - cleaned HTML content
- `links` - all links found on the page
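As a sketch of what consuming that response looks like in Python (the field names are taken from the example response above; this is plain JSON handling, no SDK assumed):

```python
import json

# A response shaped like the example above. In a real script this would
# come from response.json() on the HTTP call.
response = json.loads("""
{
  "id": "scrape_abc123",
  "url": "https://example.com",
  "markdown": "# Example Domain\\n\\nThis domain is for use in illustrative examples...",
  "screenshot": {
    "url": "https://cdn.anyhunt.app/scraper/scrape_abc123.png",
    "width": 1280,
    "height": 800,
    "format": "png"
  },
  "metadata": {"title": "Example Domain"}
}
""")

# Save the markdown for later processing, named after the scrape id.
with open(f"{response['id']}.md", "w") as f:
    f.write(response["markdown"])

# The screenshot itself lives at a CDN URL; fetch it separately if needed.
print("Screenshot hosted at:", response["screenshot"]["url"])
```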
## Next Steps

- **Output Formats** - learn about markdown, HTML, links, and screenshot options
- **API Reference** - explore all available APIs: Scrape, Crawl, Map, Extract, Search
- **Rate Limits** - understand quota and concurrency limits
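When you exceed the quota or concurrency limits, requests will be rejected until capacity frees up. A common client-side pattern is retrying with exponential backoff. A minimal sketch, assuming the API signals rate limiting with HTTP 429 (check the Rate Limits page for the actual status codes and any `Retry-After` headers, which you should honor if present):

```python
import time

def request_with_backoff(send_request, max_retries=5):
    """Retry a request with exponential backoff on rate-limit responses.

    send_request is any zero-argument callable returning an object with
    a .status_code attribute, e.g. a functools.partial around requests.post.
    """
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:  # assumed rate-limit status
            return response
        # Wait 1s, 2s, 4s, 8s, ... before retrying.
        time.sleep(2 ** attempt)
    return response
```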
## Code Examples

### Node.js
```javascript
const response = await fetch('https://server.anyhunt.app/api/v1/scrape', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.ANYHUNT_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://example.com',
    formats: ['markdown', 'links'],
    onlyMainContent: true,
  }),
});

const data = await response.json();
console.log(data.markdown);
console.log(data.links);
```

### Python
```python
import os

import requests

API_KEY = os.environ['ANYHUNT_API_KEY']

response = requests.post(
    'https://server.anyhunt.app/api/v1/scrape',
    headers={
        'Authorization': f'Bearer {API_KEY}',
        'Content-Type': 'application/json',
    },
    json={
        'url': 'https://example.com',
        'formats': ['markdown', 'links'],
        'onlyMainContent': True,
    },
)

data = response.json()
print(data['markdown'])
print(data['links'])
```

### Go
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	payload := map[string]interface{}{
		"url":             "https://example.com",
		"formats":         []string{"markdown", "links"},
		"onlyMainContent": true,
	}
	body, _ := json.Marshal(payload) // static map; marshal cannot fail here

	req, err := http.NewRequest("POST", "https://server.anyhunt.app/api/v1/scrape", bytes.NewBuffer(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("ANYHUNT_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	result, _ := io.ReadAll(resp.Body)
	fmt.Println(string(result))
}
```

## Common Use Cases
### Web Scraping

Extract content from any webpage as clean markdown:

```json
{
  "url": "https://blog.example.com/article",
  "formats": ["markdown"],
  "onlyMainContent": true
}
```

### Screenshot Capture
Capture full-page screenshots for visual testing or archiving:

```json
{
  "url": "https://example.com",
  "formats": ["screenshot"],
  "screenshotOptions": {
    "fullPage": true,
    "format": "webp",
    "quality": 90
  }
}
```

### Link Discovery
Find all links on a page for SEO analysis or crawling:

```json
{
  "url": "https://example.com",
  "formats": ["links"]
}
```

### Multi-Page Crawling
Crawl an entire website to extract content from multiple pages:

```bash
curl -X POST https://server.anyhunt.app/api/v1/crawl \
  -H "Authorization: Bearer ah_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://docs.example.com",
    "maxDepth": 2,
    "limit": 50
  }'
```

See the Crawl API documentation for more details.
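The same crawl request can be built in Python. This sketch mirrors the curl call above exactly (endpoint, headers, and parameters are those shown; the crawl response shape is not documented here, so it is simply printed). It uses only the standard library so no extra dependencies are needed:

```python
import json
import os
import urllib.request

# Same body as the curl example above.
payload = {
    'url': 'https://docs.example.com',
    'maxDepth': 2,   # follow links up to two levels deep
    'limit': 50,     # stop after 50 pages
}

req = urllib.request.Request(
    'https://server.anyhunt.app/api/v1/crawl',
    data=json.dumps(payload).encode('utf-8'),
    headers={
        'Authorization': f"Bearer {os.environ.get('ANYHUNT_API_KEY', 'ah_your_api_key')}",
        'Content-Type': 'application/json',
    },
    method='POST',
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```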