Notes from Zaif Attack

The following is primarily a translation of this blog post.

On September 20, 2018, Tech Bureau sent out a notice that they suspended deposits and withdrawals for three currencies (BTC, MONA, BCH) on the Zaif cryptocurrency exchange due to unauthorized access to its systems. This post is an aggregation of the details of that event.

Press Releases

Tech Bureau

Incident Timeline

2018.09.14, between 17:00-19:00: Approximately 6.7 billion JPY worth of assets were withdrawn without authorization.
2018.09.17: Tech Bureau detected an anomaly within the environment.
- evening: Tech Bureau suspended withdrawals/deposits for 3 currencies on Zaif.
2018.09.18: Tech Bureau identified that they had suffered a hacking incident.
- same day: Tech Bureau reported the incident to the local finance bureau and started filing papers with the authorities.
- same day: The official Zaif Twitter account tweeted that customer financial assets were safe.
- same day: In accordance with the Payment Services Act, the FSA issued a Request for Report to Tech Bureau.
Post-identification: Tech Bureau entered into a contract with Fisco for financial support.
Post-identification: Tech Bureau entered into a contract with CAICA for assistance in improving security.
2018.09.20, ~2am: Tech Bureau issued a press release declaring that deposits/withdrawals were suspended due to a hack.
- same day: The Japan Cryptocurrency Business Association called on member companies to perform emergency inspections.
- same day: The FSA sent an on-site inspection team to Tech Bureau.
2018.09.21: ETA for the FSA to issue a report on its investigation into the status of customer assets at cryptocurrency exchanges.


  • Approximately 6.7 billion JPY worth of 3 different currencies were withdrawn externally without authorization.
  • Withdrawals and deposits for the 3 affected currencies have been suspended since the evening of 17 September.

Itemization of damages

Tech Bureau's own assets | ~2.2 billion JPY
Customer assets          | ~4.5 billion JPY
  • Tech Bureau has shown that they can cover the 4.5b loss of customer assets through financial assistance from the FDAG subsidiary.

Information around the Zaif hack itself

  • Funds were withdrawn from the server managing the Zaif hot wallet.
  • Tech Bureau is still investigating the exact method of intrusion, but it doesn't look like they'll publicly announce it as a protective measure.

Details on the unauthorized transactions

Total (estimated) damages on the 3 currencies

Currency     | Amount transferred                                   | JPY conversion    | USD conversion
Bitcoin      | 5966 BTC                                             | 4.295 billion JPY | 38.207 million USD
Monacoin     | Under investigation; sources estimate 6,236,810 MONA | 650 million JPY   | 5.782 million USD
Bitcoin Cash | Under investigation; sources estimate 42,327 BCH     | 2.019 billion JPY | 17.954 million USD

Assumed recipient addresses of the hack

Currency     | Address                                    | Time of transaction
Bitcoin      | 1FmwHh6pgkf4meCMoqo8fHH3GNRF571f9w         | 2018.09.14, between 17:33:27 and 18:42:30
Bitcoin Cash | qrn0jwaq3at5hhxsne8gmg5uemudl57r05pdzu2nyd | 2018.09.14, between 17:33:15 and 17:51:24
Monacoin     | MBEYH8JuAHynTA7unLjon7p7im2U9JbitV         | 2018.09.14, between 17:39:01 and 18:54:10

work in progress

Disclaimer: I make no guarantees of the accuracy of the above article.
Please see the official press releases and/or PR department at Zaif. I am also not affiliated with Zaif or any of the companies mentioned in this article.

A Practical Behind the Scenes, Running Mastodon at Scale (Translation)

The following is a translation of this pixiv inside article.

Good morning! I'm harukasan, the technical lead for ImageFlux. Three days ago, on April 14, we at pixiv spontaneously decided to launch Pawoo, and I've found myself constantly logged into Pawoo's server environment ever since. Our infrastructure engineers have already configured our monitoring environment to cover Pawoo and prepared runbooks for alert handling. As expected, we received alerts throughout the two days following launch and, despite it being the weekend, found ourselves working off-hours to keep the service healthy. After all, no matter the environment, it's the job of infrastructure engineers to react to and resolve problems!

Architecture

Let's take a look at the architecture behind Pawoo. If you run a dig on the domain, you'll find that it's hosted on AWS. While we do operate a couple hundred physical servers here at pixiv, it isn't really possible to procure and rack new ones that quickly; this is where cloud services shine. nojio, an infrastructure engineer who joined us this April, and konoiz, now in his second year since joining as a new graduate, put together the following architecture diagram quite quickly.

Pawoo Architecture Diagram

Using as many of the services provided by AWS as we could, we brought this environment up in about 5 hours and were able to launch the service later that day.

Dropping Docker

One can bring up Mastodon fairly easily using Docker containers via docker-compose, but we decided not to use Docker so that we could separate services and deploy to multiple instances. Dealing with volumes and cgroups, among other things, is a lot of extra effort when working with Docker containers - it's not hard to find yourself in sticky situations like "Oh no, I accidentally deleted the volume container!" Mastodon also provides a Production Guide for deploying without Docker.

So, after removing Docker from the picture, we decided to let systemd handle services. For example, the systemd unit file for the web application looks like the following:


# excerpt; the unit's other sections are omitted here
[Service]
ExecStart=/usr/local/rbenv/shims/bundle exec puma -C config/puma.rb
ExecReload=/bin/kill -USR1 $MAINPID


For the relational database, Redis, and the load balancer, we decided to use their AWS managed counterparts. That way, we could quickly prepare a redundant multi-AZ data store. Since the ALB supports WebSocket, we could easily distribute streaming traffic as well. We're also using S3 as our CDN/uploaded-file store.

Utilizing AWS' managed services, we were able to launch Pawoo as fast as we could, but this is where we began to run into problems.

Tuning nginx

At launch, we had stuck with the default nginx settings provided by the distro, but it didn't take long before we started seeing HTTP errors returned, so I tweaked the config a bit. The important settings to increase are worker_rlimit_nofile and worker_connections.

user www-data;
worker_processes 4;
pid /run/nginx.pid;
worker_rlimit_nofile 65535;

events {
  worker_connections 8192;
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  sendfile on;
  tcp_nopush on;
  keepalive_timeout 15;
  server_tokens off;

  log_format global 'time_iso8601:$time_iso8601\t'; # (remainder of the format string omitted)

  access_log /var/log/nginx/global-access.log global;
  error_log /var/log/nginx/error.log warn;

  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}

Afterward, without many further changes, nginx held up pretty well. This and other ways to tune nginx are covered in my book, "nginx実践入門" (A Practical Introduction to nginx).
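As a quick sanity check on those numbers: each worker needs at least one file descriptor per connection, and the product of the two worker settings gives the theoretical connection ceiling. A small sketch, using the values from the config above:

```shell
# Values copied from the nginx config above.
worker_processes=4
worker_connections=8192
worker_rlimit_nofile=65535

# worker_connections must not exceed the per-worker fd limit.
if [ "$worker_connections" -le "$worker_rlimit_nofile" ]; then
  echo "per-worker fd limit OK"
fi

# Theoretical ceiling on concurrent connections across all workers.
total_connections=$((worker_processes * worker_connections))
echo "max concurrent connections: $total_connections"
```

In practice you also need headroom in the fd limit for upstream connections and open log/static files, which is why worker_rlimit_nofile is set well above worker_connections.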

Configure Connection Pooling

PostgreSQL, which Mastodon uses, forks a new process for every connection made to it, which makes establishing a connection an expensive operation. This is one of the biggest differences between Postgres and MySQL.

Rails, Sidekiq, and the Node.js streaming API all provide connection pooling. The pool sizes should be set to values appropriate for the environment, keeping the number of instances in mind. If you suddenly increase the number of application instances, e.g. to handle high load, you can cripple the database server (and in our case, we did). For Pawoo, we're using Amazon CloudWatch to monitor the number of connections to RDS.
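To make the pool arithmetic concrete, here's a sketch; the instance counts, pool sizes, and max_connections figure are all made-up illustrations, not Pawoo's actual values. Every process can hold up to its pool size in connections, and the worst-case total must stay below the database's connection limit:

```shell
# Hypothetical deployment; adjust to your environment.
web_instances=10      # application server processes
web_pool=16           # Rails DB pool per process
sidekiq_instances=4
sidekiq_pool=40       # should match Sidekiq's concurrency setting
max_connections=1000  # database limit, depends on the RDS instance class

# Worst case: every process fills its pool at once.
needed=$(( web_instances * web_pool + sidekiq_instances * sidekiq_pool ))
echo "worst-case connections: $needed / $max_connections"
if [ "$needed" -gt "$max_connections" ]; then
  echo "WARNING: these pool settings can exhaust the database" >&2
fi
```

Running this arithmetic before scaling out application instances is exactly the check that would have avoided the crippled-database incident described above.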

As the number of connections increased, our RDS instance became more and more backed up, but it was easy to restore stability simply by scaling the instance up. You can see in the graph below how CPU usage settled down quickly after each maintenance event:

RDS Graph

Increasing Process Count for Sidekiq

Mastodon uses Sidekiq, which was originally designed as a job queue, to pass messages around. Every time someone toots, quite a few jobs are enqueued. Processing delays in Sidekiq have been a big problem since launch, so finding a way to deal with them is probably the most important part of operating a large Mastodon instance.

Mastodon uses 4 queues by default (we're using a modified version with 5 queues for Pawoo - see issue):

  • default: for processing toots for display when submitted/received, etc
  • mail: for sending mail
  • push: for sending updates to other Mastodon instances
  • pull: for pulling updates from other Mastodon instances

For the push/pull queues, the service needs to contact the APIs of other Mastodon instances, so when another Mastodon instance is slow or unresponsive, this queue can become backlogged, which then causes the default queue to become backlogged. To prevent this, run a separate Sidekiq instance for each queue.

Sidekiq provides a CLI flag that lets you specify what queue to process, so we use this to run multiple instances of Sidekiq on a single server. For example, one unit file looks like this:


ExecStart=/usr/local/rbenv/shims/bundle exec sidekiq -c 40 -q default # process only the default queue


The most congested queue is the default queue. Whenever a user with a lot of followers toots, a ginormous number of jobs are dropped into the queue, and if they can't be processed immediately, the queue backs up and everyone notices a delay in their timeline. We're using 720 threads to process the default queue on Pawoo, but this remains a big area for performance improvements and discussion.
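Given the -c 40 concurrency in the unit file above, 720 threads implies 18 Sidekiq processes dedicated to the default queue; the arithmetic is worth sketching because it's how you'd plan how many processes to spread across servers:

```shell
# 720 default-queue threads at 40 threads per process (-c 40).
threads_per_process=40
total_threads=720
processes=$(( total_threads / threads_per_process ))
echo "$processes processes x $threads_per_process threads = $total_threads threads"
```

How those 18 processes are distributed across instances (and how many the database pool can tolerate, per the previous section) is the tuning knob.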

Changing the Instance Type

We weren't quite sure what kind of load to expect at launch, so we decided to start with a standard instance type and change it once we figured out how Mastodon uses its resources. We started out with instances from the t family, then switched to the c4 family after determining that heavy load occurred every time an instance ran out of CPU credits. We're probably going to move to spot instances in the near future to cut costs.

Contributing to Mastodon

Now, we've mainly been improving Mastodon's performance by changing the infrastructure behind it, but modifying the software itself is the more effective route. To that end, several engineers here at pixiv have been working on Mastodon improvements and submitting PRs upstream.

A list of submitted Pull Requests:

We even have a PR contributed by someone who just joined the company this month, fresh out of college! It's difficult to showcase all of the improvements our engineers have made within this article, but we expect to keep submitting further improvements upstream.


We've only just begun, but we expect Pawoo to keep growing as a service. Upstream has been improving with great momentum, so we expect the application infrastructure to change in order to keep up.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The translator of this article can be found on Mastodon at lae@kirakiratter.

Installing Redmine 1.4 on cPanel Shared Hosting

Redmine 1.4.6 (and earlier) can be installed in a shared environment. This article will detail the easiest and most reliable method of getting a Redmine instance set up on a shared cPanel web hosting account, using mod_passenger instead of Mongrel.

I wrote this article a while ago when cPanel 11.32 was the most recent version, which used Rails 2.3.14, but most of it should still apply with cPanel 11.36 and Rails 2.3.17. Redmine 2.2 requires Rails 3.x, and as a result is not likely to be supported on shared servers (though with root access, you could set up Rails 3 in a server using cPanel, but that's beyond the scope of this article).

This article was also written for a HostGator shared hosting account, so I can't vouch for other providers like DreamHost; please feel free to contact me to let me know whether this setup works on other providers using cPanel (as I'd like to believe it does).

Note: Ensure that you have SSH access enabled on your account before proceeding! You will need to be somewhat familiar with using SSH before you install Redmine in any way. Please see "How do I get and use SSH access?" for more information.

Step 1 - Setup database and subdomain

Go to your cPanel (this is the only time you will need to), and create a database to be used for your Redmine. See "How do I create a MySQL database..." for more information. You can also reference the following screenshot:

Creating a Database in cPanel

We'll call ours cubecity_redmine. Be sure to save your password, as you'll need it later on.

Next, create a subdomain and point it to the public directory of where you will place your Redmine instance. We'll be using rails_apps/redmine/public in this example:

Creating a Subdomain in cPanel

Note: It is not necessary to use a subdomain - you can definitely use a subdirectory or your primary domain, just be sure to make the appropriate changes. For simplicity and ease of maintenance, we will use a subdomain in this article.

Now that we have these set up, let's start configuring our environment for Rails applications.

Step 2 - Setup your Rails environment

Connect to your account via SSH. Your session should look similar to this:

A terminal after connecting via SSH

We will now want to edit our shell's environment variables, so that it knows where to find our ruby gems. You can use any text editor - we'll use nano in our examples. Type the following:

nano ~/.bash_profile

This will open up the nano editor. You will want to add or ensure that the following variables are in your .bash_profile:

export GEM_HOME=$HOME/.gem/ruby/1.8
export GEM_PATH=$GEM_HOME:/usr/lib/ruby/gems/1.8
export PATH=$PATH:$HOME/bin:$GEM_HOME/bin
export RAILS_ENV=production

The contents of .bash_profile as shown in nano

You can navigate the file using your arrow keys. Exit by pressing Ctrl+X (pressing Ctrl and the X key at the same time); nano will ask whether to save your changes, so press y and then Enter to save.

After this, type the following so that your environment variables are reloaded from your profile:

source ~/.bash_profile

Now we will want to edit our rubygems configuration file. Open .gemrc in nano as you did with .bash_profile above.

gem: --remote --gen-rdoc --run-tests
gemhome: /home/cubecity/.gem/ruby/1.8
gempath:
 - /home/cubecity/.gem/ruby/1.8
 - /usr/lib/ruby/gems/1.8
rdoc: --inline-source --line-numbers

The contents of .gemrc as shown in nano

If the file is empty, type all of the above. Ensure that your gempath and gemhome keys use your own username. Mine is cubecity in the above, so just replace that. Save the file using Ctrl+X after you are done.

That's it! Your environment is set up, so now let's go into downloading and installing Redmine.

Step 3 - Download and Install Redmine

Let's first move the folder that cPanel created when we made the subdomain out of the way. Run the following commands:

cd rails_apps
mv redmine oldredmine

Now we want to download the latest version of Redmine 1.4. Visit the RubyForge page for Redmine and find the tarball for the latest release. We'll use 1.4.4, the latest at the time of writing, and download it directly to the server as shown below:


You will then want to extract the tarball. Use the following to extract it:

tar xzvf redmine-1.4.4.tar.gz

Your session should look similar to this before you extract the file:

Terminal prior to executing the tar command

After you've finished untarring the download, rename your extracted directory to redmine using the mv command and go into that directory:

mv redmine-1.4.4 redmine
cd redmine

We will be using Bundler to install Redmine's dependencies. Bundler should be available on the shared server, but if it is not, you can locally install a copy by running gem install bundler. As we will be using MySQL, issue the following:

bundle install --without development test postgresql sqlite

Your session should now look like this:

Terminal after installing a bundle for Redmine

Redmine's installed! Now let's finish up and configure it...

Step 4 - Configure Redmine

Copy over the example database configuration provided by Redmine and start editing it, like below:

cp config/database.yml.example config/database.yml
nano config/database.yml

Edit your configuration for the production environment with the database name, user, and password you created at the beginning of this tutorial:

The contents of database.yml as shown in Nano

Press Ctrl+X to save. Now let's run our initial Rake tasks to create a secret and set up your database's tables:

rake generate_session_store
rake db:migrate

Terminal before running rake db:migrate

Finally, we will edit our .htaccess so that mod_passenger can handle requests for your Redmine instance:

nano public/.htaccess

Add the following two lines:

Options -MultiViews
RailsBaseURI /

Press Ctrl+X to save, and you're done! Visit the subdomain you created in step 1, and your Redmine installation should be handling requests as normal.

This method requires no stopping or starting of services. However, if you find (very rarely) that you need to restart your app, create a restart.txt file in your application directory:

touch tmp/restart.txt

The application will restart the next time it is loaded in a browser.

Have fun resolving bugs!

Using SCP to Provide a Public Upload Service

Half a year ago, I wrote a sparsely detailed post about setting up [a public upload site using SSH][], which used the authorized_keys file to restrict a key to an rsync command with certain flags enabled and a specified destination directory. That approach was poorly implemented, so I ended up removing the upload script and private key to prevent abuse.

Some time ago, I looked into ways to prevent all the mishaps that could have happened with that method. I ended up writing a pass-through bash script that parses the command sent to the SSH server, checks that the input is sane, and then executes it.

The Implementation

To start things off, here's the result:

#!/bin/bash -fue
up="$HOME/public_html/up" # upload destination: a directory accessible via HTTP (path is illustrative)
set -- $SSH_ORIGINAL_COMMAND # passes the SSH command to ARGV
function error() {
    if [ -z "$*" ]; then details="request not permitted"; else details="$*"; fi
    echo -e "\aERROR: $details"
    exit 1
}
if [ "$1" != 'scp' ]; then error; fi # checks to see if the remote is using scp
if [ "$2" != "-t" ]; then error; fi # checks for -t, the flag scp uses server-side to receive a file
shift 2 # drop "scp -t"; the destination remains in $@
if [[ "$@" == '.' ]]; then error "destination not specified"; fi # checks that the command isn't just "scp -t ."
if [[ "$@" == ../* ]] || [[ "$@" == ./* ]] || [[ "$@" == /* ]] || [[ "$@" == */* ]] || [[ "$@" == .. ]]; then
    error "destination traverses directories"
fi
dest="$up/$1" # final path inside the upload directory
if [[ -f "$dest" ]]; then error "file exists on server"; fi
exec scp -t "$dest"

We'll make this executable and place it at /home/johndoe/bin/. The following line should then be appended to /home/johndoe/.ssh/authorized_keys:

no-port-forwarding,no-X11-forwarding,no-pty,command="/home/johndoe/bin/" $PUBLICKEY

Of course, replace $PUBLICKEY with a valid public key (you should create a key pair that doesn't require a passphrase, but that's up to you). Then we simply use scp to upload a file. For the script provided above, you'll need to specify the destination file explicitly (because otherwise scp runs with '.' as the destination, and locally that could very well be an entire directory):

scp -i $pathtoprivatekey $srcfile johndoe@example.com:$destfile

(The -i flag is only needed if the key isn't your default, and example.com stands in for your server.)

You can also write a script (like I have) or use an alias to make this simpler to perform on an everyday basis.
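For reference, generating the dedicated passwordless key pair mentioned above might look like the following sketch (the upload_key file name is just an example):

```shell
# Create a passwordless key pair dedicated to the upload service.
key="$HOME/.ssh/upload_key"   # example location
mkdir -p "$(dirname "$key")"
[ -f "$key" ] || ssh-keygen -q -t rsa -N "" -f "$key"
# The .pub contents go where $PUBLICKEY appears in authorized_keys.
cat "$key.pub"
```

Keeping a separate key for this purpose means you can revoke the upload service by deleting one authorized_keys line without touching your regular key.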

The Explanation

The comments in the script should explain most of what happens, but I'll reiterate here. I begin by defining the variables used throughout the script. $SSH_ORIGINAL_COMMAND is a variable that OpenSSH provides to the program specified by the command option in your authorized_keys file; it contains the command issued remotely. I use set primarily to reduce the amount of code I have to write (it does the splitting of the variable for me). up is then defined to specify where files will be uploaded (in this case, a directory accessible via HTTP).

The error() function is defined to return a message to the person sending a request to the server and then exit. I included an escaped alarm beep only because it seems that the 'E' disappears if I don't put anything other than an 'e' in front. (I tried removing it now and, for some reason, it works fine, so it might have just been a bug with an older SSH client from last year.)

The next two lines check that the command being executed is scp with the -t flag specified - this is how the remote side runs scp as the server-side counterpart when receiving a file. After that's done, I use shift to remove the first two arguments.

$@ should then contain the remaining arguments, which in most cases is just the destination file. Flags like the recursive flag -r seem to get specified before -t, so this also prevents entire directories from being uploaded (it prevents use of the verbose flag too, but you could add more logic to handle that). $@ is then matched against any patterns that would allow the destination to be a directory other than the one specified by the up variable defined earlier (the /* pattern also catches absolute paths, but then again, you wouldn't run this script as root, would you?).

We then check to see if the destination file exists, and proceed to upload it if it does not.

Things to be Concerned About

There are a few serious issues with this approach, however. You'd want to check how much disk space is available on the server, and possibly refuse uploads once the disk is 90% full or so. The trouble is that SCP doesn't pass any other metadata about the file being uploaded, so you can't check whether accepting the file would fill the disk to 100% and cause server-wide problems (and you might not even realize this script was the source of the issue).
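A disk-space guard along those lines could be as simple as the following sketch at the top of the wrapper script (the upload path is illustrative and the 90% threshold is arbitrary):

```shell
up="$HOME/public_html/up"   # upload directory (illustrative path)
# Fall back to $HOME's filesystem if the directory doesn't exist yet.
fs=$(df -P "$up" 2>/dev/null || df -P "$HOME")
# Column 5 of POSIX df output is the "Use%" figure; strip the % sign.
usage=$(echo "$fs" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$usage" -ge 90 ]; then
  echo "ERROR: disk is ${usage}% full, uploads disabled"
  exit 1
fi
echo "disk at ${usage}%, uploads allowed"
```

This doesn't solve the underlying problem (a single huge upload can still blow past the threshold mid-transfer), but it stops the script from accepting new files once space is already tight.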

You would also want to implement a flood check within the script. This could be fairly simple: keep a data store tracking files, upload dates, and file sizes (recorded after upload, of course), then check how many files were uploaded or how much data was transferred in the last 30 minutes, and block uploads for a limited time if a threshold is exceeded. This could be an effective deterrent, but it won't stop floods of extended duration (in other words, it's not difficult to write a while-true loop that runs scp every minute on a randomly generated file). And since SCP doesn't even pass the uploader's IP, you can't deny requests from certain IPs (well, I suppose you could parse netstat, but that doesn't seem like a reliable or effective method).*
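A minimal version of that flood check, counting files that landed in the upload directory within the last 30 minutes (path and threshold are illustrative, and this trusts file mtimes rather than a proper data store):

```shell
up="$HOME/public_html/up"   # upload directory (illustrative path)
limit=20                    # max uploads allowed per 30 minutes
# Count regular files modified within the last 30 minutes.
recent=$(find "$up" -maxdepth 1 -type f -mmin -30 2>/dev/null | wc -l)
if [ "$recent" -ge "$limit" ]; then
  echo "ERROR: too many recent uploads, try again later"
  exit 1
fi
echo "$recent upload(s) in the last 30 minutes"
```

As noted above, a determined flooder can simply wait out the window, so this is a deterrent rather than real protection.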

In short, I would only use this to provide a service to friends and others I can trust not to abuse it. If either of those two problems have a solution I'm not aware about, I'm open to suggestions (and new knowledge, of course).

* (update 5 Mar) I just realised a week ago that SSH actually does pass the SSH_CONNECTION and SSH_CLIENT environment variables containing the sender's IP, so one should be able to track uploads via IP within the script easily. I'll see what I can do about the other issue.

[a public upload site using SSH]: /j/2012/06/25/

First Dive Into Lua - A Battery Widget

So, it's come to the point where my laptop has unexpectedly turned off from a dead battery one too many times, so I decided to write a battery widget using Vicious for the window manager I'm using, Awesome. The configuration files are all written in Lua, and honestly I've never touched Lua or felt like programming in it since it looks so...confuzzling.

Nevertheless, I took a look at the Vicious and Naughty libraries, and some Lua documentation to get this up and running:

batmon = awful.widget.progressbar()
batmon_t = awful.tooltip({ objects = { batmon.widget },})
vicious.register(batmon, vicious.widgets.bat, function (widget, args)
    batmon_t:set_text(" State: " .. args[1] .. " | Charge: " .. args[2] .. "% | Remaining: " .. args[3])
    if args[2] <= 5 then
        naughty.notify({ text="Battery is low! " .. args[2] .. " percent remaining." })
    end
    return args[2]
end, 60, "BAT0")

What this basically does is create a progressbar widget with the Awful library, configure its settings, create a tooltip with detailed information, and register the widget with Vicious. The Vicious portion uses the battery widget type and sets a timer to update it every 60 seconds, refreshing the progressbar percentage and the tooltip. It also checks for a low battery, which pops up a little notification box at the upper right of my screen.

I'm probably not going to be touching Lua for a while again.

A Simple Bash Alarm Clock

Someone on IRC linked to a script called DEADLINE, which got me thinking: a simple alarm clock script should be easy to concoct in bash, if that's all the end goal is. I did a quick Google search but didn't find any simple solutions - they were all excessive in some way. So I ended up creating a bash one-liner in a few minutes to see it in practice and confirm I wasn't crazy:

sleep $(( $(date --date="7 pm Feb 23, 2012" +%s) - $(date +%s))); echo "It's been a year since you touched this, and the sky is dark! Lalala."

I could easily expand this to request a simple date and play an audio file:

printf "What time are you setting this alarm for? "
read date
echo "Okay! Will ring you on $(date --date="$date")."
sleep $(( $(date --date="$date" +%s) - $(date +%s) ))
echo "Wake up!"
while true; do
  /usr/bin/mpg123 ~/alarm.mp3
  sleep 1
done
This can accept date inputs like "january 1 next year", "tomorrow", "23:00 today", and so forth. In fact, one could expand this script to test for valid dates, or replace the prompt with argument parsing for the same behaviour as DEADLINE. I would also suggest adding nohup, to detach the script from the shell and run it in the background once the date has been entered.
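Testing for valid dates is straightforward with GNU date, since --date exits non-zero on unparseable input. A possible helper (the name validate_date is my own):

```shell
# Returns success only if $1 parses as a date and lies in the future.
validate_date() {
  local target now
  target=$(date --date="$1" +%s 2>/dev/null) || return 1
  now=$(date +%s)
  [ "$target" -gt "$now" ]   # an alarm must be in the future
}

if validate_date "tomorrow"; then echo "tomorrow: ok"; fi
if ! validate_date "not a real date"; then echo "gibberish: rejected"; fi
```

Dropping this in before the sleep line would catch typos instead of letting the arithmetic blow up with an empty timestamp.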

In fact, I may update this later when I have the time with a more robust (but still short and inexpensive) version.

There are a lot of things you can do with native UNIX utilities, and I don't quite understand why they aren't taken advantage of more often.