Welcome to my website! You can explore my entire blog below, all the way back to October 2010, or filter it to see just posts in the Robots or Photography categories.

There are also static accumulation pages that collect posts from specific blog categories in one place, which you can access from the header above.

If you’re here for AP Calculus notes, go to leonoverweel.com/mathnotes.
If you’re here for AP Economics notes, go to leonoverweel.com/economicsnotes.
If you’re here for AP Statistics notes, go to leonoverweel.com/statisticsnotes.

PS: There are Easter eggs hidden on the static pages of this site; see if you can find them! Also, this post is sticky; newer ones are right below.

Wykki Updates: Freebase & Public Beta

By Leon Overweel on November 10, 2014

In the beginning of September, I launched Wykki, a website that answered natural language questions by scraping data from Wikipedia data dumps.

The launch was awesome, and about 2,000 people tried Wykki out in the first 24 hours.

But seeing all the different questions people were asking made one thing very clear: my approach of scraping Wikipedia data and storing it in my own database was not efficient, accurate, or robust enough to be sustainable as Wykki’s knowledge base expanded.

That’s why, in the past two months, I rewrote all of Wykki’s code from scratch to decouple the natural language data (what a question means) from the factual data (the answer to a question); and to get data from Freebase (which is made for computers to read) instead of Wikipedia (which is made for humans to read).

So, what is Wykki now?

The best way to describe Wykki is this:

Wykki uses data from Freebase to answer natural language questions, and learns how to answer new question types over time.

In other words, Wykki answers and learns; I’ll talk about each of those separately below.

Answering Your Questions

In short, Wykki is designed to answer questions that ask about a specific Property of a certain Subject. That means it can answer questions like these:

When Wykki is asked a question, it separates the subject (“Empire State Building”) from the actual question (“What is the Structural Height of …”). Wykki knows what Freebase properties that question refers to (“Structural Height”), and then searches Freebase to return the right answer to the user.
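That splitting step can be sketched roughly like this (the template store, the property name, and the matching logic below are made up for illustration, not Wykki's actual data or code):

```python
# Illustrative sketch of separating a question into a subject and a
# reusable template. The templates dict stands in for Wykki's learned
# question/property links.
templates = {
    "what is the structural height of the [subject]": "Structural Height",
}

def parse_question(question, known_subjects):
    """Separate a question's subject from its reusable template."""
    q = question.lower().rstrip("?")
    for subject in known_subjects:
        if subject.lower() in q:
            return subject, q.replace(subject.lower(), "[subject]")
    return None, None

subject, template = parse_question(
    "What is the Structural Height of the Empire State Building?",
    ["Empire State Building"],
)
print(subject)                  # Empire State Building
print(templates.get(template))  # Structural Height
```

Once the subject is stripped out, the remaining template is what gets matched against known question types.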

Of course, the above questions all sound very robotic. Wykki is also robustly able to answer the same questions phrased in much more natural ways:

Note that those questions had neither proper names for the subjects nor for the properties, but that Wykki was still able to answer them. How does this work? That’s where Wykki’s learning comes in.

Learning to Understand Questions

What makes Wykki tick is user input. If Wykki comes across a question it’s never seen before, it asks the user to guide it towards the right answer, and it’ll always keep confirming the accuracy of the questions it already knows.

Learning New Questions

First, Wykki asks the user to specify the subject of the new question, so it can tell which part of the input is which.

Wykki asking for the subject of the question “How many episodes of Modern Family are there?”

Second, Wykki lists all of Freebase’s properties for that subject. The user is asked to select the property that relates to the question he or she asked.

Wykki asking for the property that relates to the question “How many episodes of … are there?”

Third and finally, Wykki answers the question–and that’s the last thing the user sees.

Wykki answering the newly learned question about Modern Family

In the background, however, Wykki has now learned to associate the question the user asked with the property the user specified. From the screenshots above, that means that, if another user asks “How many episodes of [TV Show] are there?” Wykki will be able to answer:

Wykki answering “How many episodes of Game of Thrones are there?”

This separation of questions and subjects is very powerful: once Wykki learns a new type of question, it can immediately answer it for every single subject that question is relevant to.

Reinforcing Old Questions

Because the relationships between questions and the properties they relate to are user-generated, there are bound to be some errors. That’s why Wykki also has a scoring system that determines the “relatedness” between a question and a property.

When a user asks a question, sometimes the scoring system jumps in and asks if the answer Wykki provided was relevant to the question the user asked. The user can then simply click “yes” or “no,” and the scoring system will either increase or decrease that question/property combo’s relatedness score.

The more confident Wykki is in the relevance of an answer, the less the scoring system shows up; the less confident Wykki is, the more it shows up.

Once the relatedness score drops below a certain threshold, the question and property are unlinked.
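The feedback loop above can be sketched like this (the starting score, step sizes, and thresholds are made-up values, not Wykki's actual tuning):

```python
# Sketch of the feedback-driven "relatedness" score for one learned
# question/property combination.
UNLINK_THRESHOLD = 2

class Link:
    def __init__(self):
        self.score = 5  # start with moderate confidence

    def feedback(self, was_relevant):
        # User clicked "yes" or "no" after seeing an answer.
        self.score += 1 if was_relevant else -1

    def should_ask(self):
        # The less confident the link, the more often we ask for feedback.
        return self.score < 8

    def is_linked(self):
        return self.score >= UNLINK_THRESHOLD

link = Link()
link.feedback(False)
link.feedback(False)
link.feedback(False)
print(link.score, link.is_linked())  # 2 True: one more "no" unlinks it
```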

Other Tidbits

Here are some other things that Wykki does; some of them are new, some of them aren’t:

  • Wykki now answers in full sentences, which are constructed using the names Freebase has for the subject and property that the user asked about.
  • Wykki now has units! If a property has a unit associated with it (like meters for the height of a person, or degrees Celsius for a boiling point), that unit will show up in the answer. Big numbers (e.g. a million) will now also show up more nicely (1,000,000 vs. 1000000).
  • Everything in Wykki now works with Unicode, so special characters such as accented letters no longer break anything.
  • With the new Freebase data, Wykki should be able to answer questions about over 46 million topics, and reply with over 2.6 billion facts!
  • On the back end, Wykki still runs on NDB and Jinja2 on top of Google App Engine, and is still written completely in Python.
  • The whole site is also still completely responsive, so it should look good on any device!
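The unit and big-number formatting from the list above can be sketched as follows (the unit handling is a guess at the behavior described, not Wykki's code):

```python
# Sketch of formatting answers with thousands separators and units.
def format_answer(value, unit=None):
    # Integers get comma separators; other values pass through as-is.
    text = f"{value:,}" if isinstance(value, int) else str(value)
    return f"{text} {unit}" if unit else text

print(format_answer(1000000))        # 1,000,000
print(format_answer(443, "meters"))  # 443 meters
```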

Wrap Up

Go give Wykki a try! The public Beta is available right now, right here: http://wykki.com.

Also, again a big shout out to the Freebase forums and Stack Overflow communities for being awesome at answering all my questions. Thanks everyone!

Be sure to follow @askwykki on Twitter.

Introducing Wykki

By Leon Overweel on September 7, 2014

Update: Much of this post is irrelevant now that the new version of Wykki is operational. See this post for more information.

Besides my internship and an awesome family vacation to Maine and Massachusetts, I also spent some time on a new project over the summer: Wykki.

In the most general terms, Wykki is a lot like a search engine, which, instead of showing a list of web results, directly answers the question you’re asking.

Wykki answering the question “Who played severus snape?”

Existing search engines (such as Google and Bing) have also recently started to develop the ability to answer similar questions–so how is Wykki different?

Two reasons:

  1. Wykki is slowly absorbing all of Wikipedia
  2. Wykki learns how to answer new question types as more people use it

Absorbing Wikipedia

Wikipedia is a very useful source of empirical information. Although the body paragraphs of its articles are sometimes changed in controversial ways (but not as much as you’d think), there is a place where its most reliable information is available in an easily digestible format: the Infotables.

Infotables are where Wikipedia stores, well, info, and you can find them at the top right of most articles. They’re all categorized and nicely formatted, which makes it easy to extract data from them. The one on the right, about Severus Snape, is an example of an Infotable.

For my AP Computer Science final project at the end of Junior year, I made a Java program that takes the (~40 GB) monthly backup of Wikipedia, which is available for free online, and scrapes it for Infotables. It then organizes this information into one master Vocabulary file and thousands of sub-files for every article.

Wykki takes this data and imports it into a Google App Engine database, which is a lot more efficient than constantly reading and writing text files on a PC as the Java program did.

Using Python, Wykki is then able to match information to questions–if you were to type in “Severus Snape portrayer” for example, it’d match that to the “Severus Snape” Entity and its “portrayer” Property, yielding the proper result as seen above.

But how do you get from ambiguous commands like “Severus Snape portrayer” to answering natural language questions like “Who played Severus Snape?” That’s where the learning comes in.

Learning New Question Types

Every Property–be it age, nationality, height, or anything else in the left column of an Infotable–is assigned a unique 9-character identifier. “Portrayer,” for example, is assigned to P37616415.

That way, if Wykki is looking at the “Severus Snape” Entity in its database, it can find the Property marked P37616415, and know what value to return to the user.
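The Entity/Property lookup can be sketched like this (the IDs mirror the post's example, and the data and function names are illustrative):

```python
# Sketch of looking up a Property value via its 9-character ID.
properties = {"portrayer": "P37616415"}  # property name -> unique ID

entities = {
    "Severus Snape": {"P37616415": "Alan Rickman"},
}

def lookup(entity, property_name):
    pid = properties.get(property_name)
    if pid is None:
        return None  # unknown property: trigger the learning flow below
    return entities.get(entity, {}).get(pid)

print(lookup("Severus Snape", "portrayer"))  # Alan Rickman
```

The indirection through the ID is what lets many differently-phrased questions point at the same Property.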

But how does this enable learning?

Sometimes, Wykki encounters a question that it can match to an Entity, but not to a Property. An example of this would be asking “Who created Severus Snape?”–Wykki would be able to find the “Severus Snape” Entity but not be able to match “Who created [Entity]?” to a Property.

To solve this, Wykki asks for extra input from the user:

Wykki asking for more information to answer “Who created Severus Snape?”

Once the user clicks the intended Property–“creator” in the example, which links to the Property P65196977–Wykki sends through the right answer:

Wykki answering “Who created Severus Snape?”

This is where the magic happens–now that a single user has told Wykki that “Who created [Entity]?” is related to the Property P65196977, Wykki can use this in the future.

So if a different user then asks “Who created Albus Dumbledore?” Wykki knows that they’re referring to P65196977, and is able to answer right away:

Wykki answering “Who created Albus Dumbledore?”

This is a very powerful approach, because it means that Wykki does not have to learn what questions like “Who created Severus Snape?” and “Who created Albus Dumbledore?” mean individually, but can learn a more general version of the question once and apply it to many current and future Entities.

Here’s a couple of questions Wykki can already answer (they’ll open in a new window):

Website Design

The site itself consists of a quite simple white box on a gray background, with a header and footer wrapped around it.

The box consists of four parts:

  1. The top message (small, 1em size)
  2. The middle message (large, 1.7em size)
  3. The bottom message (small, 1em size)
  4. The input box

Wykki can populate all four of those from the server, or choose not to populate one or two of the messages, which makes them disappear without breaking the layout.

The site is also adaptive, changing easily from a layout for a 27″ desktop monitor to one for a 5″ smartphone screen. (Try going to http://wykki.com/ and resizing your window!)

Under the Hood

Some of this has come up before, but here’s a quick rundown on the different things that power Wykki:

The main code is about 600 lines of Python (excluding comments and spacing), the HTML is about 35, and the CSS is about 100.

What’s Next

As you can see from the site and screenshots, Wykki is in Alpha right now. Why is that? Mostly because I’ve only imported about 10,000 Entities from Wikipedia so far, which means Wykki can answer “How many episodes of Friends are there?” but not “How many episodes of Modern Family are there?”

The problem here is that importing data from Wikipedia is fairly slow, and I’m limited by how much I can upload to Google App Engine every day. Plus, Wikipedia’s data is constantly being updated, so my copy would go out of sync.

What I’ll probably end up doing is rewriting most of the code to be a layer on top of Freebase (a free database of information that knows hundreds of millions of facts), where I won’t have to import the actual information, but just link the different Properties to my dictionary of questions. That way my database would only have to contain natural language data (as in, what questions relate to what Properties), while Freebase can be changed, updated, and grown without me having to account for the changes.

Another system, whose groundwork is already in place but which still needs an interface, is the scoring system. The scoring system evaluates the “relatedness” between learned questions and their Properties over time, which is needed to prevent (accidental or intentional) linking of unrelated Properties and questions.

Wrap Up

So yeah, that’s Wykki. It’s been a really fun project so far, and I’ve learned a ton (shout out to Codecademy for teaching me Python and to all the Stack Overflow users who have answered my many questions).

I’m hoping to keep working on Wykki and implementing the stuff above ASAP (but I just started Senior year, so it might take a while).

So, what are you waiting for? Head over to http://wykki.com/ and give it a try!

For questions regarding Wykki, email ask@wykki.com, or tweet @askwykki.


TU Delft Summer Internship: 3mxl Control Table XML Interface

By Leon Overweel on July 31, 2014

Long time no write! Between SATs, APs and SAT IIs, my junior year at Rye High has been extremely busy–so I haven’t been able to build too many robots lately. I have, however, been working on a couple of long-term projects. I’ll have updates up on those soon.

Now that school has ended, though, I’ve got a lot more time on my hands, and I’ve spent a lot of it at a month-long internship at the TU Delft Robotics Institute in (you guessed it) Delft, the Netherlands. I got the opportunity to intern there thanks to my Science Research mentor, Guus Liqui Lung, who is Research Engineer at the university’s Biomechanical Engineering group.

What I’ve been doing

The “purpose” of my internship there was for me to learn about using ROS (Robot Operating System). ROS is a big open source project that consists of a collection of interoperating drivers, nodes, and packages that let you add a bunch of functionality to all kinds of robots; I’m interested in ROS because I’d like to integrate the library I’ve been developing in Science Research into it (more on that soon).

The robotic wheel platform I tested my code on at the internship

The Robotics Institute at the TU has developed a board called the 3mxl, which integrates up to 253 daisy-chained motors into ROS using the Dynamixel protocol. The board keeps a 256 byte “control table” for each motor, which contains user-set parameters to make the motor behave as it should (things like the gearbox ratio, encoder resolution, motor constant, wheel diameter, spring stiffness, etc.).

This is extremely useful for abstracting away a lot of the lower level functionality, but it is kind of hard to edit, which is where the project I worked on comes in: I made an easy way for both first-time users (Minor students) and experienced users to quickly view and edit the data contained in the control table through a combination of XML files and a custom RQT-based GUI.

The startup screen of my GUI, before the program scans for motors on the bus

The basic premise behind my program is that it scans the 3mxl board for motors, and then, for each one it finds, reads out relevant parts of the control table (using either existing get methods, single byte reading, or bitwise shifts to combine low and high bytes for longer data types). It then applies appropriate type changes and unit conversion factors, and finally writes the parameters to both a standards-compliant XML file and to an external struct (maintained by my library) that allows both the GUI and user-made programs to access the control table data.
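The low/high byte combination mentioned above works like this (a Python sketch; the real code is C++, and the table layout and address here are illustrative, not the actual 3mxl control table):

```python
# Sketch of combining a low and a high byte into one 16-bit value,
# as done when reading multi-byte control-table entries.
def read_word(table, address):
    """Combine the low byte at `address` with the high byte at `address + 1`."""
    return (table[address + 1] << 8) | table[address]

control_table = [0] * 256
control_table[36] = 0x34  # low byte (illustrative address)
control_table[37] = 0x12  # high byte
print(hex(read_word(control_table, 36)))  # 0x1234
```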

The GUI in editing mode

Through the GUI, the user can then edit the parameters in the struct (which was read from the current control table on the 3mxl when the XML file was generated, or imported from a GUI-made XML), and write them back to the 3mxl.

Whenever that happens, the most recent changes are used to update the XML file, which looks something like this:

<controlTable>
    <motor id="106">
        <constant name="motorEncoderDir">1</constant>
        <constant name="motorGearBoxRatio">19.700</constant>
    </motor>
    <motor id="107">
        <constant name="motorEncoderDir">0</constant>
        <constant name="motorGearBoxRatio">6.200</constant>
    </motor>
</controlTable>

The XML file is completely portable, so a file generated on one computer-robot combination can be used to import parameters to any other computer-robot combination (provided that it has the same number of motors, and the motors have the same IDs). All this functionality is also available as a library, so the user can, in a few lines of code, parse the XML file at the beginning of their code without having to go through the GUI.
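Parsing such a file is straightforward; here's a minimal sketch in Python (the real library is C++/ROS, and this assumes a well-formed file with a single root element and closed `<motor>` tags):

```python
# Sketch of parsing a control-table XML file like the example above
# into a dict of {motor ID: {parameter name: value}}.
import xml.etree.ElementTree as ET

xml_text = """<controlTable>
    <motor id="106">
        <constant name="motorEncoderDir">1</constant>
        <constant name="motorGearBoxRatio">19.700</constant>
    </motor>
</controlTable>"""

def parse_control_table(text):
    root = ET.fromstring(text)
    table = {}
    for motor in root.findall("motor"):
        table[int(motor.get("id"))] = {
            c.get("name"): float(c.text) for c in motor.findall("constant")
        }
    return table

print(parse_control_table(xml_text))
# {106: {'motorEncoderDir': 1.0, 'motorGearBoxRatio': 19.7}}
```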

Because of that, the user can change the parameters just by editing the XML file (by hand or through the GUI), and make changes to the control table without having to slowly write and recompile any code–particularly useful for tuning a PID algorithm, for example.

The GUI after having updated the motor’s PID gains.

Being able to import and export data between XML files and the control table also adds the advantage that someone who knows what they’re doing (e.g. my mentor) can generate a couple of standard XML files that the students can then import (and slightly tweak if necessary) and load onto their robots very easily without needing much prior knowledge.

The binaries of the two packages (the XML generation/parsing backend and the GUI front end) are now available on TU Delft’s private SVN for Minor students to start using next year.

The Takeaway

It’s been awesome working in the lab because, even though I was testing my code on a lowly wheel base, I could see the really cool, really complex, really big (and really secret) robots my code can be run on to make the researchers’ lives a little bit easier.

It was also nice to be working with C++ again–which I hadn’t done since I took an online class on it sophomore year–and I learned a lot of new stuff, from all the 3mxl interface stuff to structs to bitwise operations to Qt GUIs to what the hell a CMake file is.

And, of course, I got a pretty good understanding of ROS and how to interface with it, which I’ll be able to apply to my own Science Research project next year at school–more on that in a later post.

Overall, it was a great month. :D

PS: More updates coming soon; I’m also going to start looking into using ROS with the LEGO Mindstorms NXT motors and sensors I already have–probably with a UDOO Mini PC and an Arduino board to interface it with the NXT stuff.

Dual Axis Motorized RC Camera Rig for Lumia 928

By Leon Overweel on May 24, 2014

I got a new phone a couple of weeks ago: the Nokia Lumia 928. It has a great camera, so I decided to build a rig for panning and tilting it.

The robot features two degrees of freedom that can be manipulated independently, is controlled through Bluetooth via an easy to use remote, and rotates the phone exactly around where the lens is located. I’ll get into all those things individually after the video; building instructions are at the bottom of the page.

Lumia Rig demo video


The robot itself has two motors. The first moves a single small gear that spins the turntable and pans the phone. The second is a little more complex; it drives a worm wheel through the center of the turntable, which, through another series of gears, is responsible for tilting.

To damp the obnoxiously strong vibrations the Mindstorms NXT motors cause at high speeds, the entire base rests on the rubber shock absorbers that come with the NXT set.

The remote is very minimalistic, featuring a symmetrical design with two motors on the sides that are used to input remote control data into the system, and nothing else. Both follow the red with shades of gray color scheme I’ve been using for most of my robots lately.


The remote and the rig are both programmed in RobotC and connected via Bluetooth; three types of signals (control mode, motor A speed, motor B speed) are encoded and sent between them 33 times a second.

The “control mode” variable tells the rig what mode the remote is in: you can either control the position, where the rig quickly moves the same number of degrees the remote control wheels are turned, or the speed, where the dials control how fast the motors turn. The other two variables then tell the motors how much to move.

Because of the hardware design, if the base were to spin while the tilting motor stands still, the camera would still tilt (the gears would rotate around the worm wheel and spin themselves). To compensate for that, the remote makes sure the worm wheel is always spinning at 1/7th the speed of the panning speed; that speed is then increased or decreased to tilt. This allows for truly individual control over the different functions of the rig.
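The compensation math can be sketched like this (a Python sketch of the idea; the actual remote is programmed in RobotC, and the input values below are arbitrary):

```python
# Sketch of the pan/tilt compensation: the tilt motor always runs at
# 1/7th of the pan speed (a ratio set by the rig's gearing), so panning
# alone leaves the camera level; tilt input is added on top of that.
def motor_speeds(pan_input, tilt_input):
    pan_speed = pan_input
    tilt_speed = pan_input / 7.0 + tilt_input
    return pan_speed, tilt_speed

# Panning with no tilt input still drives the worm wheel to compensate:
print(motor_speeds(70, 0))  # (70, 10.0)
# Adding tilt input tilts the camera on top of that:
print(motor_speeds(70, 5))  # (70, 15.0)
```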

Build Your Own

If you’ve got a Lumia 928 and would like to build this robot yourself, you can use the resources below.

If you have a different phone: the robot was designed to be adjustable for different sized phones fairly easily.

UPDATE: My friend over at One Mindstorm has created a version of the Lumia Rig optimized for iPhones. You can check it out here for building and programming instructions.

If you have any questions, tweet them to me and I’ll get back to you ASAP.

NYC Chinatown & Little Italy Cinemagraphs

By Leon Overweel on July 21, 2013

I went to the city with my parents a couple of days ago. I took lots of regular DSLR pictures, but also tried out taking Cinemagraphs on my phone for the first time:

Lots of people sitting around here, mostly playing board games and cards. Some tables had quite the crowd of spectators around them.

View outside the restaurant where we ate.

Steaming hot pots of food at another restaurant we walked by.

The Little Italy sign changed colors every couple of seconds.

Two cowboys having a drink at a Little Italy bar.

RHS Science Research Symposium Flyer Robot

By Leon Overweel on June 25, 2013

The RHS Science Research Symposium Flyer Robot, about to hand out a program.

A couple of weeks ago, we had our annual Science Research symposium, where Rye High School students show visitors the research they’ve done over the year in either a poster or PowerPoint presentation.

A poster I drew for the symposium, featuring illustrations for everyone’s research.

For my research, which I’ll share more about later, I built a demo robot to hand out the flyers/programs to the people who came by. See the video I made of it below:

Device-wise, the robot consists of the following functional parts:

  • 2 Mindstorms NXT Intelligent Bricks (1.0 & 2.0)
  • 2 100mm Firgelli Linear Actuators ~
  • 2 Mindstorms NXT Motors
  • 1 LEGO Technic Small Motor
  • 4 Mindsensors Flexi-Cables for NXT (1 meter long & 1.5 meters long) ~
  • 1 Custom-made NXT-Technic Motor cable ~
  • 3 Touch Sensors
  • 1 Ultrasonic Sensor

As for how it worked, the process was fairly simple; it’s explained in the gallery below:

After the Ultrasonic Sensor registers a new visitor, the Master NXT (“Jeeves”) sends a Bluetooth signal to the Slave NXT (“Alfred”), which then turns four wheels that push the bottom program into the arm’s gripper.

Then, Alfred closes the gripper to grab the program once it’s pushed forward enough to hit a touch sensor (the in-focus gray axle extends into the sensor).

When Alfred finishes, it sends a Bluetooth signal back, and Jeeves uses the Linear Actuators to move the flyer to the desired position.

A second joint, also controlled by Jeeves, assists in handing out the flyer.

Finally, Jeeves once again signals Alfred to let go of the program, and the arm returns to its default position, assisted by two more touch sensors used to calibrate different parts of the arm.

Originally, I’d also planned to use an IMU to measure when the visitor grabs a program and only let go of it then; due to time constraints, however, I decided to just use a two second timer. The arm itself, as you can see in the video, is pretty shaky too, so it would have been hard to filter out those vibrations from those caused by a potential visitor grabbing the program.

Overall, though, I’m happy with how the arm turned out, and I’m glad it attracted some people to my poster.

Celebrating 50+ FCCYSF Videos and 50+ Subs

By Leon Overweel on June 24, 2013

About a year ago, I started FCCYSF, a YouTube channel to host Creative Commons stock footage available for anyone to use completely for free, with neither upfront nor royalty costs.

Some featured FCCYSF videos

I’m proud to say the channel has recently passed 50 subscribers and has over 50 videos uploaded. I’ve also started contacting more people who might be interested in contributing; other than myself, there is one contributor so far.

If you’re interested in becoming a contributor (you’ll get a 70% cut of the advertisement profits made on your footage), or know anyone who might be, drop me an email at fccysf@gmail.com.

The Minerals Page for Just Over Art

By Leon Overweel on June 18, 2013

My mom has recently started using epoxy resin to make multi-layered paintings, and they look great. Some of her newest ones are centered around different types of rocks and minerals, and those are what the page I made for her site showcases.

Screenshotting it hasn’t given me any good results due to scaling and such, but you can check it out right here. Here’s a picture of one of the paintings instead:
One of Lisette Overweel’s Minerals paintings, as seen on her site.

The basic idea behind the page is that there’s a static background that can adapt to anything from a 2560p Cinema Display, to a 16:9 Surface held vertically to a smartphone; the content scrolls in front of it, boxed in by a white background.

The background also adapts to browsers that don’t allow it to stay static, and instead duplicates itself. The way that works without looking ugly/inconsistent is that every other copy is vertically flipped, so that there are no harsh breaks between the tops and bottoms of any two occurrences of the image.


© 2009-2013

Unless otherwise specified, all rights reserved to Leon Overweel.