GSoC 2018: Merged
Two more weeks have passed, or maybe almost three, during which I have:
- learned more about processing data fetched from APIs using jQuery promises in an OOUI app — which I will tell you all about, whether you like it or not.
- learned to handle some Git issues specific to the Gerrit workflow — which I will tell you nothing about, even though you’re probably so very curious.
- had some patches merged into the code base for the first time.
Let us rejoice.
New adventures with OOUI
As you might recall, MediaWiki uses an object-oriented JavaScript library for UI components, particularly widgets, called OOUI. This isn't a UI library like Twitter Bootstrap or Semantic UI, where you get pre-made CSS classes to add to your HTML tags. In OOUI, each widget is a class in the object-oriented sense of the word, so basically we're dealing with pre-built elements, each being a widget or component that can be extended (a button, for instance). When building a very simple app, you can just append a bunch of widgets to a single DOM element, like the wrapper div below, and that's all you'll need in the body of your index.html file.
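Something along these lines, for example (the specific widgets and class names here are placeholders, not the actual practice app):

```javascript
// A bare-bones index.html body:
//   <h1>ToDo List</h1>
//   <div class="wrapper"></div>
// plus <script> tags for jQuery, OOjs and OOjs UI.

// Two pre-built OOUI widgets…
var input = new OO.ui.TextInputWidget( {
		placeholder: 'ToDo item'
	} ),
	addButton = new OO.ui.ButtonWidget( {
		label: 'Add',
		flags: [ 'primary', 'progressive' ]
	} );

// …appended to a single wrapper div. That's the whole app skeleton.
$( '.wrapper' ).append( input.$element, addButton.$element );
```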
As the <h1> above indicates, my practice code was originally a ToDo List app. Currently it is more of a general-purpose sandbox. To avoid clutter, at some point I decided to separate it into two apps; the second app will focus on all the API stuff. I might share more code snippets from these practice projects soon.
Like, maybe even right now.
Promise more than you can deliver
Below you will find an example of a widget named ToDoItemWidget that uses two getJSON calls. What I learned from writing this piece of code was how to use the jQuery .when() method to group together the results returned by more than one promise (in this case, only two), and how to use Deferred to override .when()'s default behavior. By default, if one (or more) of the individual promises fails, the entire group of promises fails. We don't necessarily want that to happen! Sometimes we want to display whatever was fetched, even if we couldn't get all the data we asked for.
So what we have below are two getJSON promises. The first one, getJSON1(), sends a request to the GitHub user API, fetches my user data and returns my name (data.name), while the other, getJSON2(), gets random pics from Flickr and does nothing with them at the moment. They each log their respective data object to the console. I could have used my getJSON2() function to display a photo (or several photos) upon search or button click, if I wanted, and defined bothPromised() to append data from both APIs into one component.
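In rough outline, the pattern looks like this (a trimmed-down sketch: standalone functions rather than ToDoItemWidget methods, a placeholder GitHub username, and the public Flickr feed standing in for the photo call):

```javascript
function getJSON1() {
	// GitHub user API; 'some-username' is a placeholder.
	return $.getJSON( 'https://api.github.com/users/some-username' )
		.then( function ( data ) {
			console.log( data );
			return data.name;
		} );
}

function getJSON2() {
	// Public Flickr feed (JSONP); the photos aren't used yet.
	return $.getJSON( 'https://api.flickr.com/services/feeds/photos_public.gne?format=json&jsoncallback=?' )
		.then( function ( data ) {
			console.log( data );
			return data.items;
		} );
}

// Wrap a promise in a Deferred that always resolves, so one failed
// request doesn't make $.when() reject the whole group.
function settle( promise ) {
	var deferred = $.Deferred();
	promise.then( deferred.resolve, function () {
		deferred.resolve( null );
	} );
	return deferred.promise();
}

function bothPromised() {
	return $.when( settle( getJSON1() ), settle( getJSON2() ) )
		.then( function ( name, photos ) {
			// Display whatever did arrive, even if the other call failed.
			console.log( 'GitHub name:', name );
			console.log( 'Flickr photos:', photos );
		} );
}

bothPromised();
```

The settle() wrapper is the Deferred trick: each wrapped promise always resolves (with null on failure), so .when() fires its done handler either way, and you can render whichever pieces of data actually arrived.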
getJSON1() could be used to fetch lots of other pieces of data, such as my repos info, my profile picture, etc. It could also be made to fetch the data of other users, but not with the current API call. For that you might want to use something like url: 'https://api.github.com/users/' + username, create a variable called username, and set it to store whatever you type into an input field. You would also need to get a client ID and a client secret.
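Something like this, for instance (the widget and helper names here are just illustrative):

```javascript
var usernameInput = new OO.ui.TextInputWidget( {
	placeholder: 'GitHub username'
} );

function getUserData( username ) {
	return $.ajax( {
		url: 'https://api.github.com/users/' + username,
		dataType: 'json'
		// Unauthenticated requests are rate-limited; registering an app
		// and sending its client ID/secret raises the limit.
	} );
}

// E.g. on a button click or an 'enter' keypress:
getUserData( usernameInput.getValue() ).then( function ( data ) {
	console.log( data.name );
} );
```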
The first merge
As I continue my work with the practice code, I get closer to the real coding style used in Wikimedia's projects. It will soon be time to put these skills and this knowledge into action on the new filters feature. Until then, I've been taking on smaller tasks, and have recently had some patches merged into the code base. Here are a few of them:
- Adjust layout for saved filters empty state
- Update feedbacklink text
- Use the trash icon in the saved filters menu
- Using ‘trash’ icon instead of misleading ‘clear’ and changing ‘remove’ to ‘delete’ accordingly.
These changes may not be big, but they will reach millions of unsuspecting users all over the world, and will affect the way they all see and use Wikipedia. Crazy, right?
Let’s party!
The workflow and the review process
Working in an orderly manner is a great skill I'm acquiring these days, and I hope to sustain these new habits in the long run. I'm more accustomed to the workflow and review process now, and I've started enjoying the work rhythm, which makes me feel more confident about my productivity and usefulness. Still, we all seek external validation and approval, don't we? And did you know that bots can provide you with this sort of attention? It's true. Whenever a bot sends me an email to notify me that I did good, it is a happy moment. No ESLint errors. No tests breaking. Verified +2. The best kind of verified.
Of course, this is not final approval in any way; it is just the first step of the review. Naturally, all new code must pass human review as well (that's how things work in nature). This is particularly important when something is wrong. An automated test can't inform you that you should add some inline documentation, for instance. It can't tell you that it would be a good idea to explain where this or that number is coming from — was it the result of a calculation? What does this calculation represent? Was it performed by an automatic process, or by you? Lots of different people work on this codebase, so there are pretty strict guidelines meant to keep you from making the big fat mess you would otherwise love to make.
The incredible thing is that these rules and guidelines are actually enforced. Shocking, I know. But otherwise, in a year or so, when some other developer looks at my code, she'll be scratching her head in confusion and horror, and no one wants that to happen. Because excessive scratching can be unsanitary.
Well, good luck to you, future developer. I hope you’ll find my inline comments meaningful and informative. May the source be with you.