
How to Use the Scraping API Dashboard (Step-by-Step Guide)

Written by Proxyrack
Updated over a week ago

The Scraping API dashboard helps you generate a ready-to-use scraping URL that retrieves raw HTML data from a target website using Proxyrack’s infrastructure.

This guide explains:

  • What the dashboard does

  • How to generate a scraping URL

  • How users typically use that URL

  • What the Scraping API does and does not do

The Scraping API lets you retrieve website content using simple API requests.
You send a request with the URL you want to scrape, and our system fetches the page and returns its content (HTML). If something goes wrong, the API returns an error response instead.

What the Scraping API does (important to set expectations)

The Scraping API only handles data access, meaning it:

  • Loads the target webpage

  • Bypasses common blocking mechanisms

  • Returns the raw HTML response

It does not:

  • Extract products, prices, or users automatically

  • Convert data into JSON, CSV, or structured formats

  • Replace a custom scraping or parsing script

Step-by-step: Using the Scraping API dashboard

Step 1: Open the Request Builder

Go to your dashboard and open the Scraping API → Request Builder section.

Step 2: Enter the target URL

Paste the full URL of the page you want to scrape.

Example:

https://www.example-store.com/products

Step 3: Configure request options (optional)

Depending on your use case, you can select:

  • Country (IP location)

  • Device type (desktop or mobile)

  • Proxy session

  • Premium proxies (if needed)

These options affect how the page is loaded.

Step 4: Generate the scraping URL

Once the form is completed, the dashboard automatically generates a full scraping request URL.

This URL includes:

  • Your API key

  • Target URL

  • Selected options
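Conceptually, the generated URL just combines those pieces as query parameters. The sketch below is illustrative only: the endpoint (`scraping-api.example.com`) and parameter names (`key`, `url`, `country`, `device`) are assumptions, not Proxyrack's actual API schema, so always use the Copy button in the dashboard for the real URL.

```python
from urllib.parse import urlencode

# NOTE: the endpoint and parameter names here are hypothetical placeholders.
# The dashboard's Copy button gives you the real, ready-to-use URL.
def build_scraping_url(api_key: str, target_url: str,
                       country: str = "US", device: str = "desktop") -> str:
    base = "https://scraping-api.example.com/v1/fetch"  # placeholder endpoint
    params = urlencode({
        "key": api_key,      # your API key
        "url": target_url,   # the page you want to scrape
        "country": country,  # IP location option
        "device": device,    # desktop or mobile
    })
    return f"{base}?{params}"

url = build_scraping_url("MY_KEY", "https://www.example-store.com/products")
print(url)
```

Note how the target URL is percent-encoded when it becomes a query parameter; the dashboard handles this for you automatically.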

Step 5: Copy the generated URL

Click Copy to copy the generated scraping URL.

How to use the generated scraping URL

Option A: Browser (testing only)

You can paste the scraping URL directly into your browser.

What happens

  • The browser loads the request

  • You receive the raw HTML response

Best for

  • Quick testing

  • Verifying the page loads correctly

Option B: Terminal or script (recommended)

Most users call the scraping URL from:

  • Terminal (curl, wget)

  • Python, Node.js, or other scripts

Why

  • Allows automation

  • Enables data parsing

  • Works at scale
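As a minimal sketch of the script route, the copied URL can be called with Python's standard library alone. The `SCRAPING_URL` value below is a placeholder; paste the actual URL you copied from the Request Builder.

```python
import urllib.request

# Placeholder: replace with the URL copied from the Request Builder.
SCRAPING_URL = ("https://scraping-api.example.com/v1/fetch"
                "?key=MY_KEY&url=https%3A%2F%2Fwww.example-store.com%2Fproducts")

def fetch_html(scraping_url: str, timeout: int = 60) -> str:
    """Call the scraping URL and return the target page's raw HTML."""
    with urllib.request.urlopen(scraping_url, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Usage (performs a live request):
#   html = fetch_html(SCRAPING_URL)
#   print(html[:500])
```

From here, automation is just a loop: call `fetch_html` for each target URL and pass the result to your own parsing code.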

Understanding parsing

What parsing means

Parsing is the process of extracting useful data (e.g. product name, price, SKU) from raw HTML.

Example:

If you want to scrape products from an e-commerce store, you need:

  1. Raw HTML (provided by Proxyrack)

  2. A custom parsing script (written by you)

Each website:

  • Has a different structure

  • Requires a different parser

There is no universal parser that works for all websites.
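To make parsing concrete, here is a small sketch using Python's built-in `html.parser`. The sample HTML and the `name`/`price` class names are invented for illustration; a real parser must be written against the actual structure of the site you are scraping.

```python
from html.parser import HTMLParser

# Invented sample markup; real sites will differ.
SAMPLE_HTML = """
<div class="product"><span class="name">Widget</span><span class="price">$9.99</span></div>
<div class="product"><span class="name">Gadget</span><span class="price">$19.99</span></div>
"""

class ProductParser(HTMLParser):
    """Collects text inside <span class="name"> and <span class="price">."""

    def __init__(self):
        super().__init__()
        self.products = []
        self._field = None
        self._current = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "price"):
            self._field = cls

    def handle_data(self, data):
        if self._field:
            self._current[self._field] = data.strip()

    def handle_endtag(self, tag):
        if tag == "span":
            self._field = None
        if tag == "div" and self._current:
            self.products.append(self._current)
            self._current = {}

parser = ProductParser()
parser.feed(SAMPLE_HTML)
print(parser.products)
# → [{'name': 'Widget', 'price': '$9.99'}, {'name': 'Gadget', 'price': '$19.99'}]
```

Notice how tightly the parser is coupled to the tag and class names: change the site's markup and the parser must change too, which is why no universal parser exists.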

Typical real-world example

Goal: Extract product data from an online store

You need:

  1. Scraping API URL → gets the HTML

  2. A script that:

    • Reads the HTML

    • Locates product elements

    • Outputs structured data (JSON, CSV, database, etc.)

Proxyrack provides Step 1 only.
Step 2 is handled by the user or their developer.
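The final "structured data" step can be sketched with the standard library. The `products` list below stands in for whatever your own parser produced from the raw HTML (the values are illustrative); the rest shows turning those records into JSON and CSV.

```python
import csv
import io
import json

# Illustrative output of your own parsing step (Step 2).
products = [
    {"name": "Widget", "price": "$9.99"},
    {"name": "Gadget", "price": "$19.99"},
]

# Structured output as JSON
as_json = json.dumps(products, indent=2)

# Structured output as CSV
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "price"])
writer.writeheader()
writer.writerows(products)
as_csv = buf.getvalue()

print(as_json)
print(as_csv)
```

Writing to a database instead is the same idea: iterate over the parsed records and insert them with your database client of choice.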

Who is this tool for?

The Scraping API is best suited for:

  • Developers

  • Data engineers

  • Technical teams

  • Users already familiar with scraping workflows

Basic users can still test requests via browser, but full usage requires scripting knowledge.

Common questions (FAQ)

Does Proxyrack parse products for me?
No. We return raw HTML only.

Can I get structured data automatically?
No. You must create or use your own parser.

Why does each site need a different parser?
Because every website has a unique HTML structure.

Is the terminal required?
Not required, but strongly recommended for real use cases.

The Scraping API is a building block, not a full scraping solution.
It gives you reliable access to data — what you do with that data is up to you.
