Public API

The public API node serves as an experimental endpoint. It is offered for free on a best-effort basis.

You may:

  • use it for prototyping your tools
  • use it for testing

You may not:

  • expect it to be reliable
  • spam it with unnecessary load

Running your own node

You can run a similar node with relatively little effort, assuming you know how to compile the official BitShares daemon.

BitShares Daemon

This is the config.ini file for the witness_node:

rpc-endpoint = 127.0.0.1:28090        # Accepts JSON-HTTP-RPC requests on localhost:28090
required-participation = false        # Do not fail if block
                                      # production stops or you are disconnected from
                                      # the p2p network
bucket-size = [15,60,300,3600,86400]  # The buckets (in seconds) for the market trade history
history-per-size = 1000               # Number of buckets to keep per bucket size
max-ops-per-account = 1000            # Max amount of operations to store in the
                                      # database, per account
                                      # (drastically reduces RAM requirements)
partial-operations = true             # Remove old operation history
                                      # objects from RAM

This opens port 28090 on localhost. From there, you can either expose this port directly to the public, or tunnel it through a webserver (such as nginx) to add SSL on top, do load balancing, throttling, etc.
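To verify that the node answers on that port, you can send it a JSON-RPC request. The sketch below (Python, standard library only) builds and sends such a call; `get_dynamic_global_properties` is a standard database-API method, but the `rpc_call` helper, the URL, and the payload layout shown here are illustrative assumptions rather than an official client.

```python
import json
import urllib.request

# Endpoint matching the rpc-endpoint setting above (assumption: plain HTTP on the root path).
NODE_URL = "http://127.0.0.1:28090"

def make_rpc_payload(api, method, params, request_id=1):
    """Build a JSON-RPC 'call' payload in the shape the witness_node expects."""
    return {
        "jsonrpc": "2.0",
        "method": "call",
        "params": [api, method, params],
        "id": request_id,
    }

def rpc_call(api, method, params):
    """POST the request to the node (requires a running witness_node)."""
    data = json.dumps(make_rpc_payload(api, method, params)).encode()
    req = urllib.request.Request(
        NODE_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())["result"]
```

With a node running locally, `rpc_call("database", "get_dynamic_global_properties", [])` should return the current head block information.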

Nginx Webserver

The node uses an nginx server to

  • provide a readable websocket url
  • provide SSL encryption
  • perform throttling
  • allow load balancing

The configuration would look like this:

limit_req_zone $binary_remote_addr zone=ws:10m rate=5r/s;
                            # Defines the rate-limiting zone used below
                            # (must live in the http context; the rate is
                            # an example value -- tune it to your needs)

upstream websockets {       # load balancing two nodes
    server 127.0.0.1:28090; # first locally running witness_node
    server 127.0.0.1:28091; # second node -- adapt the ports to your
                            # rpc-endpoint settings
}

server {
    listen 443 ssl;
    root /var/www/html/;

    # Force HTTPS (this may break some websocket clients that try to
    # connect via HTTP)
    if ($scheme != "https") {
            return 301 https://$host$request_uri;
    }

    keepalive_timeout 65;
    keepalive_requests 100000;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    location ~ ^(/|/ws) {
        limit_req zone=ws burst=5;
        access_log off;
        proxy_pass http://websockets;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream     error timeout invalid_header http_500;
        proxy_connect_timeout   2;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

As you can see from the upstream block, the node uses load balancing and failover across two locally running witness_node instances. This allows you to upgrade the code and replay one node while the other takes over the full traffic, and vice versa.
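Before taking one node down for an upgrade, it is worth confirming that the other one still accepts connections. The Python sketch below does a simple TCP reachability check; the helper names and the port numbers are illustrative assumptions matching the two-node upstream block above.

```python
import socket
from contextlib import closing

def is_node_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with closing(socket.create_connection((host, port), timeout=timeout)):
            return True
    except OSError:
        return False

def alive_backends(backends):
    """Filter a list of (host, port) pairs down to the reachable ones."""
    return [b for b in backends if is_node_alive(*b)]
```

For example, `alive_backends([("127.0.0.1", 28090), ("127.0.0.1", 28091)])` should still contain the surviving node while you replay the other one.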