
Complete Working Example

This example demonstrates a real automation workflow using Wikipedia, a site that’s stable and doesn’t have aggressive bot detection.
from tzafon import Computer

client = Computer()

with client.create(kind="browser") as computer:
    # Navigate to Wikipedia
    computer.navigate("https://wikipedia.org")
    computer.wait(2)

    # Take initial screenshot
    result = computer.screenshot()
    print(f"Homepage: {computer.get_screenshot_url(result)}")

    # Click search box (coordinates may vary by viewport)
    # Use screenshot to find the right coordinates for your setup
    computer.click(500, 250)
    computer.wait(0.5)

    # Type search query
    computer.type("Claude AI")
    computer.hotkey("enter")
    computer.wait(3)

    # Capture search results
    result = computer.screenshot()
    print(f"Results: {computer.get_screenshot_url(result)}")

    # Scroll down to see more content
    computer.scroll(dx=0, dy=500)
    computer.wait(1)

    # Final screenshot
    result = computer.screenshot()
    print(f"Scrolled: {computer.get_screenshot_url(result)}")

Important Notes

Coordinates are viewport-dependent. The coordinates (500, 250) for the search box may not work on your screen size. Always:
  1. Take a screenshot first
  2. Find the element’s position
  3. Use those specific coordinates
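If you capture coordinates at one window size and later run at another, you can scale them proportionally rather than re-measuring everything. A minimal sketch — the reference and target sizes below are assumptions, substitute whatever your actual viewport is:

```python
def scale_point(x, y, ref_size=(1280, 720), viewport=(1920, 1080)):
    """Scale a pixel coordinate measured at ref_size to a new viewport size.

    Both sizes here are illustrative defaults, not values from the SDK --
    take a screenshot to confirm your real browser dimensions.
    """
    rw, rh = ref_size
    vw, vh = viewport
    return round(x * vw / rw), round(y * vh / rh)

# The (500, 250) search-box click from the example, rescaled to 1920x1080:
print(scale_point(500, 250))  # (750, 375)
```

This only works for layouts that scale linearly with the viewport; responsive pages that rearrange elements still need a fresh screenshot.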
Many sites have bot detection. Sites like Google, Amazon, and social media platforms actively block automation. Always:
  1. Check the site’s robots.txt file
  2. Review their Terms of Service
  3. Use sites designed for testing (like Wikipedia) when learning
Use computer.wait() after navigation and interactions. Pages need time to load and respond.
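Checking robots.txt can itself be scripted with Python's standard library. A sketch using urllib.robotparser — the rules below are fed in as an inline example so it runs offline; against a real site you would call set_url() and read() instead:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Offline example: parse rules directly. For a live site use:
#   rp.set_url("https://en.wikipedia.org/robots.txt"); rp.read()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/public/page"))   # True
print(rp.can_fetch("*", "https://example.com/private/page"))  # False
```

Note that robots.txt only expresses crawl preferences; a site's Terms of Service can still prohibit automation that robots.txt allows.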

Finding Coordinates

The most reliable way to find coordinates:
  1. Run a simple script to screenshot the page:
result = computer.screenshot()
url = computer.get_screenshot_url(result)
print(url)
  2. Open the screenshot in an image viewer
  3. Hover over elements to see pixel coordinates
  4. Use those coordinates in your script

Getting Page Content

Extract HTML for data processing:
from tzafon import Computer

client = Computer()
with client.create(kind="browser") as computer:
    computer.navigate("https://wikipedia.org")
    computer.wait(2)

    # Get page HTML
    result = computer.html()
    html_content = computer.get_html_content(result)

    # Process the HTML
    if html_content:
        print(f"Page length: {len(html_content)} characters")
        # Use BeautifulSoup, lxml, or similar to parse
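If you would rather avoid extra dependencies, the standard library's html.parser can handle simple extraction. A sketch that collects link targets — the sample markup is illustrative, not real Wikipedia HTML; in practice you would feed it the html_content from above:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from anchor tags as the parser streams the document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Illustrative input; replace with the html_content extracted earlier
sample = '<html><body><a href="/wiki/Main_Page">Main</a><a href="/wiki/Help">Help</a></body></html>'
collector = LinkCollector()
collector.feed(sample)
print(collector.links)  # ['/wiki/Main_Page', '/wiki/Help']
```

For anything beyond simple link or text extraction, BeautifulSoup or lxml (as mentioned above) remain the better tools.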

Best Practices

  1. Always use wait() after navigation and interactions
  2. Check robots.txt before automating any site
  3. Use screenshots to debug coordinate issues
  4. Handle errors - check result.status for each action
  5. Use context managers (Python with) for automatic cleanup
  6. Start simple - get navigation + screenshot working first
  7. Test incrementally - add one action at a time
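Practices 4 and 7 can be combined into a small retry wrapper. A sketch assuming each action's result exposes a status attribute (the "ok" value is an assumption — verify the actual status values the SDK returns before relying on this):

```python
import time

def run_checked(action, attempts=3, delay=1.0, ok_status="ok"):
    """Run an action, retrying when result.status reports failure.

    ok_status is an assumption, not a documented SDK value -- inspect
    result.status in your own runs to see what success actually looks like.
    """
    last = None
    for _ in range(attempts):
        last = action()
        if getattr(last, "status", ok_status) == ok_status:
            return last
        time.sleep(delay)
    raise RuntimeError(
        f"action failed after {attempts} attempts: {getattr(last, 'status', None)!r}"
    )

# Usage sketch: run_checked(lambda: computer.screenshot(), attempts=3)
```

Wrapping each step this way keeps transient failures (slow loads, missed clicks) from silently corrupting the rest of the workflow.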

Sites Good for Testing

  • Wikipedia - Stable, no bot detection for basic viewing
  • books.toscrape.com - Practice scraping site
  • quotes.toscrape.com - Another practice site
  • httpbin.org - API testing and form submissions

Sites to Avoid When Learning

  • Google (aggressive CAPTCHA)
  • Amazon (bot detection)
  • Facebook/Twitter/LinkedIn (strict anti-automation)
  • Banking/financial sites (security measures + ToS violations)

Next Steps