Developer toolbox
Innovative tips, tricks, and tools to improve your workflow.

Functional Programming Feat...

I’ve been exploring functional programming with Scala and its ecosystem for the past few months. In this post, I’ll highlight some of the features of the language that enable intuitive creation of functional code for distributed systems and data operations.

Higher Order Functions
As per the official documentation, functions are first-class objects in Scala, which means that they can:
Take another function as an argument, or
Return a function
An example of a function taking another function as an argument is the map() function in Scala's standard collections library.

val examplelist: List[Int] = List(2,9,8,14)
examplelist.map(x => x * 2) // anonymous function as argument

When working with standard Scala collections, it’s also very intuitive to chain operators, especially with the infix notation. In the small code example below, I’m defining a list of numbers from 1 to 20, filtering on even numbers and then summing them up.

(1 to 20).toList filter (_ % 2 == 0) reduce (_ + _)

The _ is the wildcard operator - in the case of maps and filters, it refers to the value in the collection.

Recursion
The recommended way to do operations on all the items in a collection is to use the operators map, flatMap, or reduce. In case those operators don’t meet a use case’s requirements, it’s very useful to write a tail-recursive function to operate on all the items in a collection. The code example below shows a tail-recursive function definition to compute the factorial of a number.

import scala.annotation.tailrec

// Factorial function implementation that uses tail recursion
@tailrec
def factorial(in_x: Double, prodsofar: Double = 1.0): Double = {
  if (in_x == 0) prodsofar else factorial(in_x - 1, prodsofar * in_x)
}

factorial(5)

In Scala, a tail-recursive function as shown above can be optimised by the compiler (via the @tailrec annotation) to occupy just one stack frame, so there's no chance of a stack overflow error even for many levels of recursion. This is possible out of the box, without any need for frameworks or plugins. As mentioned above, though, the recommended way is to use the collections operators (such as reduce, etc.). As a demo of the ease of use of the collections APIs, the above factorial function can also be implemented by the one-liner below:

(1 to 5).toList reduce (_ * _)

To conceptually understand reduce, check out this great link! (Also do check out the explanations of foldLeft, foldRight, map and flatMap to understand some commonly used data operations!)

Case Classes
Case classes can be instantiated very easily with no boilerplate code, as in the example below.

case class BusinessTransaction(
  sourceaccountid: Long,
  targetaccountid: Long,
  amount: Long
)

// create some transactions now to demo case classes
// I lend my friend 20
val xaction1 = BusinessTransaction(112333L, 998882L, 20L)
// My friend pays me back
val xaction2 = BusinessTransaction(998882L, 112333L, 20L)

Just the one case class definition above does the following useful things:
Defines the 3 immutable values sourceaccountid, targetaccountid and amount
Defines accessor methods for the constructor arguments (e.g. xaction1.amount)
Beyond the ease of use, case classes are the recommended way to store immutable data instances in Scala. For example, in a big data application, each line of a large datafile can be modelled by a case class and stored. An example of the use of a case class to store data is here.
In the linked example, the function rawPostings models each line of the datafile as an instance of the case class Posting. It will eventually return a dataset of type RDD[Posting].

Pattern Matching
In Scala, objects such as case classes, regular classes, and collections can be decomposed through pattern matching. Essentially, this means that you can use pattern matching to:
Decompose an object’s type (example below)
Get the head of a collection (such as a List or a Seq)
The code example below shows how to use pattern matching to decompose a Seq.

val seq1: Seq[Int] = Seq(1,3,4,5,5)
seq1 match {
  case x :: y => println(s"The first element in the sequence is ${x}")
  case Nil => println("The sequence is empty")
}

The cons operator (::) creates a list made of the head (x) and the rest of the list (called the tail, y).

Companion Objects
In OOP, a static variable is sometimes used in a class to store state or properties across multiple instantiated objects. However, there is no static keyword in Scala. Instead, what we use are Companion Objects, aka Singleton Objects. A Companion Object is defined using the object keyword and has the exact same name as its accompanying class. Companion objects can define immutable values, which can then be referenced by methods in the class. There are 2 common patterns for using companion objects in Scala:
As a factory method
To provide functionality that is common to the class (i.e. what a static function would do in Java)

// The 'val specie' straightaway defines an immutable class parameter
abstract class Animal(val specie: String) {
  import Animal._
  // Common behaviour to be mixed in to Canine/Feline classes
  def getConnectionParameters: String = Animal.connectionParameter
}

object Animal {
  // .apply() is the factory method
  def apply(specie: String): Animal = specie match {
    case "dog" => new Canine(specie)
    case "cat" => new Feline(specie)
  }
  val connectionParameter: String = System.getProperty("user.dir")
}

class Canine(override val specie: String) extends Animal(specie) {
  override def toString: String = s"Canine of specie ${specie}"
}

class Feline(override val specie: String) extends Animal(specie) {
  override def toString: String = s"Feline of specie ${specie}"
}

// syntactic sugar, where we don't have to say new Animal
val doggy = Animal("dog")
val kitty = Animal("cat")
doggy.getConnectionParameters

Options
Most application code checks for Null/None types. Null types are handled a little differently in Scala — the construct used is called an Option. This is best demonstrated with an example.

val customermap: Map[Int, String] = Map(
  11 -> "CustomerA", 22 -> "CustomerB", 33 -> "CustomerC"
)
customermap.get(11) // Map's get() returns an Option[String]
customermap.get(11).get // Option's get returns the String
customermap.get(999).get // Will throw a NoSuchElementException
customermap.get(999).getOrElse(0) // Will return a 0 instead of throwing an exception

In a language like Python, if None: checks would be quite common throughout the codebase. In Java, there would be try-catch blocks to handle thrown exceptions. Options allow you to focus on the logic flow with minimal diversions for type or exception checks. A standard way of using Options in Scala is for your custom functions to return Option[String] (or Int, Long, etc.).
Let's look at the Map structure's get() function signature:

def get(key: A): Option[B]

One (intuitive) way to use this is to chain it with the getOrElse() function as shown below:

// Map of IDs vs Names
val customermap: Map[Int, String] = Map(
  11 -> "CustomerA", 22 -> "CustomerB", 33 -> "CustomerC"
)
customermap.get(11).getOrElse("No customer found for the provided ID")

A very useful way of using Options is together with a collection operator like flatMap, which handles the types for you transparently.

// Map of IDs vs Names
val customermap: Map[Int, String] = Map(
  11 -> "CustomerA", 22 -> "CustomerB", 33 -> "CustomerC"
)
val listofids: List[Int] = List(11,22,33,99)
listofids flatMap (id => customermap.get(id)) // flatMap magic

And that’s it from me! My next excursion is to explore concurrent systems with Akka and the Actor model. Look out for a future post, where I’ll share my learnings on that topic (and its relationship to Scala’s approach to functional programming).
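For readers more at home in JavaScript, here is a rough, illustrative analogue of the Option-plus-getOrElse/flatMap pattern above, using a plain JavaScript Map. This is my own sketch for comparison, not part of the original article; JavaScript's Map.get() simply returns undefined for a missing key instead of an Option.

// Map of IDs vs Names, mirroring the Scala example above
const customermap = new Map([[11, 'CustomerA'], [22, 'CustomerB'], [33, 'CustomerC']]);

// getOrElse analogue: supply a fallback when the key is missing
const name = customermap.get(999) ?? 'No customer found for the provided ID';

// flatMap analogue: look up every id and silently drop the misses
const listofids = [11, 22, 33, 99];
const names = listofids
  .map(id => customermap.get(id))
  .filter(n => n !== undefined); // ['CustomerA', 'CustomerB', 'CustomerC']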

Speed Up Your JavaSc...

For a recent personal project, I only needed a fairly simple Node.js server, but it had to run some costly, exponentially growing computations. To be honest, I could have switched the entire tech stack, but I estimated that the development time of such a choice wasn’t worth it… Still, I had some functions taking ages to compute. So I had a look around, and decided to hand that work to a more appropriate language, in this case Rust. This let me dedicate the right task to the right language: my otherwise simple server just had to handle routes and calls, which Node does with ease, while all the tedious number-crunching went to Rust.

What is Rust?
Rust is a low-level, safety-focused language designed at Mozilla, and it has topped the Stack Overflow Developer Survey as the most loved programming language for four years in a row. It runs blazingly fast compared to most other languages, often rivalling C itself. It’s neither a purely functional nor a purely object-oriented language, and its syntax is close to C++. Its equivalent of npm is called Cargo, and its packages are named crates.

How Do You Mix Rust with NodeJs?
Fortunately for me, I wasn’t the first person who wished to mix Rust with NodeJs. This has been handled by far more talented people, through what is called a Foreign Function Interface (FFI) and dynamic libraries (the files that end in .dylib, for example). This allows a program running in one language (the host language) to call functions written in another (the guest language), just as it would call one of the host language's own libraries. And inside the guest language's functions, you have access to any useful third-party library of the guest language!

So How Do We Write It?
Let’s get started with the basics then. First we will need Rust and Cargo:

curl https://sh.rustup.rs -sSf | sh

Once we’re done, we can create a new project, in this case a library:

cargo new --lib <libraryname>

This sets up a new directory with a src folder and a Cargo.toml (the equivalent of package.json). Now that we’re set up, let’s write our code. For this example, in order to keep it simple yet explicit, we’ll just create a recursive Fibonacci number function. It is very simple, commonly used for benchmarking (it runs in O(2^n) time), and relies on deep recursion, which is quite a limitation for JavaScript (even more so on the front end, as very few browsers optimise recursive calls). So let’s open src/lib.rs and write our function:

fn fibonacci(n: u64) -> u64 {
    if n <= 2 { return 1 }
    return fibonacci(n-1) + fibonacci(n-2);
}

#[no_mangle]
pub extern "C" fn fibonacci(n: u64) -> u64 {
    if n <= 2 { return 1 }
    return fibonacci(n-1) + fibonacci(n-2);
}

The first function is how you would write it if it were destined for the same Rust program. However, we are building a dynamic library, so we need to make a few changes, which I am going to review:

#[no_mangle]

This line is an attribute: it gives the compiler instructions that modify the program at compile time. In this case, it prevents the compiler from changing the name of the function through name mangling. In short, name mangling is how your compiler renames functions in order to make sure the correct one gets called (to differentiate List.get() from Array.get(), for example). The output often looks something like _Z4xfunction1HgF7jffibonacci. But this would be the name we’d have to call from Node, so we want to keep it simple. Then we have pub. This means that the function is publicly available to use, so we can call it from outside this module. Finally, the extern "C".
This indicates that we are using the C ABI (Application Binary Interface). In our case we could omit the "C", as extern defaults to the C ABI; the annotation is there to let us target other calling conventions, such as the Windows API's. We can then compile and try it within a small Node app. We’ll do this with the --release flag, as we want Rust to optimise the binary (without that instruction, Cargo builds in debug mode, which can be surprisingly slow).

cargo build --release

This will create the dynamic library (libdemo-rust-node.dylib on macOS, in this example) under ./target/release. In a Node.js app, let’s try to call our function. For this, we will need node-ffi:

npm i ffi

In our code, let’s import our dynamic library. This gives us a variable, much like a require(). After giving the path to the library, we also need to indicate the functions we wish to import from it, specifying the return type and the parameters they take.

var lib = ffi.Library(path.join(__dirname, './target/release/libdemo-rust-node.dylib'), {
  fibonacci: ['int', ['int']],
  killorface: ['int', ['int']]
});

A function like pow, taking a double and an integer and returning a double, would be imported like this:

pow: [ 'double', [ 'double', 'int' ] ]

We’ll declare an equivalent function in JS and call both of them with console.time to benchmark them:

var ffi = require('ffi');
var path = require('path');

var lib = ffi.Library(path.join(__dirname, './target/release/libdemo-rust-node.dylib'), {
  fibonacci: ['int', ['int']],
});

function fibonacci(n) {
  if (n <= 2) {
    return 1;
  }
  return fibonacci(n - 1) + fibonacci(n - 2);
}

console.time()
var rustFibonacci = lib.fibonacci(30);
console.timeEnd()

console.time()
var nodeFibonacci = fibonacci(30);
console.timeEnd()

console.log(rustFibonacci, nodeFibonacci)

Let’s run it:

user$ node index.js
default: 2.850ms
default: 10.805ms
832040 832040

As we can see, both returned the same result. However, there is a noticeable difference in computing time. Keep in mind, though, that this microbenchmark does not account for the loading time of the library.

Restrictions
There are still some restrictions on using FFIs. First, keep in mind that an FFI call is costly in itself, as stated in the node-ffi readme, for example:

There is non-trivial overhead associated with FFI calls. Comparing a hard-coded binding version of strtoul() to an FFI version of strtoul() shows that the native hard-coded binding is orders of magnitude faster. So don't just use the C version of a function just because it's faster. There's a significant cost in FFI calls, so make them worth it.

If you’re loading a dynamic library, this also comes at a cost, and you might not reach the expected performance. Also, if you’re after low-level, optimised code, the best you can do is load the library only once for many uses; otherwise a native C extension would be better.

A few precisions: this example being trivial, it only uses simple integer types in the functions. If you’re looking to work with JavaScript objects and types directly in Rust, have a look at Neon. More importantly, you can’t make code running in a browser handle such calls…

How About the Front End Then?
You might have heard about WebAssembly (wasm). It is a stack machine that aims to execute C/C++ and other fast languages at near-native speed alongside JavaScript. In effect, it achieves what we did previously, but at a higher level and using cross-language standards, and Rust treats WebAssembly as a first-class build target. You can write and publish your npm module in Rust.
You can also install it and run it through most module bundlers, though the popular webpack is the best documented of them all. Let’s have a quick tour of how to proceed with the previous example. First we install wasm-pack in order to compile and produce an npm package from our code:

$ cargo install wasm-pack

Also, in order to publish our package, we’ll assume you have an npm account already set up. Then, let’s create the project:

$ cargo new --lib wasm-fibo
Created library `wasm-fibo` project

In the Cargo.toml, we need to add a few things:

[lib]
crate-type = ["cdylib"]

[dependencies]
wasm-bindgen = "0.2"

In the newly generated src/lib.rs, let’s write our function:

extern crate wasm_bindgen;
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn fibonacci(n: i32) -> i32 {
    if n <= 2 { return 1 } else { return fibonacci(n - 1) + fibonacci(n - 2) }
}

We are using wasm_bindgen, a bridge between Rust and JavaScript. It takes care of the mangling problem, but we still need to declare our function as publicly available with the pub keyword in front. Let’s build our package now:

$ wasm-pack build [--scope <mynpmusername>] --release

That creates a pkg directory at the root with a number of files in it. Together they describe everything consumers need to know about your package: which functions it exports, what arguments they require, and what types they return. Now, let’s publish it and use it:

$ cd ./pkg && npm publish

Now, in our webpack application, we’ll just have to install it through npm:

$ npm i [@<mynpmusername>/]wasm-fibo

(If you published it with your username in scope, you will need to carry the scope in all imports.) Done! Now we can use it like any other npm package, here with the ES6 syntax:

import { fibonacci } from "wasm-fibo";
console.log('This is the wasm result: ', fibonacci(23));

To Conclude
FFI and WebAssembly are two practical solutions for faster processing without paying a huge price in development time and comfort: they give you more time to develop in a higher-level language, while still letting the right tool handle the right work, and they give you access to libraries that don’t exist in your host language. Between the two, the nuances can be subtle. In short, WebAssembly is platform agnostic: it runs in browsers, on servers, inside PHP, anywhere. A WebAssembly module exports symbols, such as functions or memories, and you can call those functions from the outside. Beyond the running environment, a wasm module can easily interact with JavaScript features such as the console or alert(), but it also has some limitations, including those of the bundler and, when run on the front end, of the browser you use. Most of the time, if not carefully designed, the runtime performance gain is very small for a single call and not as fast as FFI calls (loading aside). In both cases, there is a cost to calling external functions. A single call to a very “heavy” function will often be worth it with an FFI, whereas the WebAssembly solution pays off once there is at least a moderate number of calls.
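One practical takeaway from the Restrictions section above is to pay the FFI loading cost only once. Here is a minimal sketch of that idea, assuming the same library path and fibonacci signature used in the benchmark; the wrapper module and the rustFibonacci export name are illustrative, not from the original post.

// fib.js — the ffi.Library call runs once, when this module is first required
var ffi = require('ffi');
var path = require('path');

var lib = ffi.Library(path.join(__dirname, './target/release/libdemo-rust-node.dylib'), {
  fibonacci: ['int', ['int']],
});

// Cheap wrapper that the rest of the app (routes, background jobs, ...) can call repeatedly
module.exports.rustFibonacci = function (n) {
  return lib.fibonacci(n);
};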

How I Get Projects G...

In this article, I’ll be demonstrating my workflow for getting started on a project — by setting up an AWS EMR cluster using a CloudFormation template. I’ll first introduce both the Spark app and the CloudFormation template I’ll be using. I’ll then deploy my demonstration Spark app’s assembly JAR to an S3 bucket, before running the app on the EMR cluster. Finally, I’ll query the Hive external table created on the cluster using Hue. As a managed service, I find EMR to be a great option to spin up a cluster and get started right away — and it provides powerful computing functionality, as well as flexible storage options. Before I go on, here are the dependencies I need:
* AWS account with root access (preferably)
* IDE and Unix terminal
* The AWS EMR pricing guide! — to determine the most cost-effective region to host my EMR cluster.

About The Spark App
For this blog post, I’ll use my simple demonstration app hosted on Github at simple-spark-project, which will:
* Read an Apache web server log file into an RDD[String] structure
* Define a schema, and convert the above RDD[String] into a DataFrame
* Create a temp view and then an external Hive table to be stored in an S3 bucket
The app takes in 3 input parameters:
* S3 URL of the Apache web server log file
* S3 URL of the output S3 bucket
* The spark.master configuration (which will be ‘local’ in this example)
The build configuration file in the project defines the libraryDependencies and the assemblyMergeStrategy to build the assembly/uber JAR — which is what will be executed on the EMR cluster.

Build Infrastructure Using CloudFormation
The CloudFormation template I’ll use defines and configures:
* IAM roles to deploy and read to/from S3 buckets
* IAM users to read to/from S3 buckets
* 3 x S3 buckets to host the assembly JAR, the Hive external table and the EMR cluster logs
* The EMR cluster
Fig. 1 — CloudFormation template
To create the stack, I’ll follow these steps:
1. Navigate to Services > Compute > EC2 > Key pairs and create a key pair. Download the .pem file.
2. Navigate to the CloudFormation service from the AWS console.
3. Open CloudFormation Designer (via the ‘Design template’ button).
4. Open the template.
5. Validate and create the stack.
The template will output the KeyId and the SecretKey of the newly created IAM users, who will be named DataArchUser-* and DataArchAdmin-*.

Create Steps On The EMR Cluster
To create a step on the cluster, I’ll navigate to Services > EMR > Clusters and add a Spark application step in the ‘Steps’ tab of my cluster.
Fig. 2 — Step configuration to add Spark application

Check Output Hive External Table
Once the Spark app has completed, I can query the final Hive external table in Hue using HiveQL.
Fig. 3 — Querying the external table in Hive
And that’s it! I now have a working cluster that I can use to develop and run more complex applications. This isn’t a production-grade cluster, but it is one you can quickly spin up to begin work on a new project.

Pair-Programming: A ...

In the blog post Pair Programming: A Developer’s Perspective, Mingwei helpfully teased out the benefits and pitfalls of the practice of Pair Programming. Typically, Pair Programming is viewed as the Development Team’s concern - the kind of stuff only techies would and should care about. So, if you happen to be the Scrum Master and your team members are advocating this, you’ll have your work cut out for you. But it is still work that is worth your time. So here are a Scrum Master’s perspectives on Pair Programming.

Who Are You and How Are You Involved?
You are the Scrum Master. The servant-leader who coaches the Development Team in self-organisation. You are adamant Pair Programming is good for delivery in the long run. But if the team is not considering this, you worry that enforcing this practice will interfere with their autonomy. After all, you have the Scrum rules carved into your heart. Remember that bit about “No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into increments of potentially releasable functionality”? To push for Pair Programming is similar to telling them just that. You fear breaking the Scrum rules. Your conscience is killing you. All is not lost, though. There is another point we can make about a Scrum Master’s service to the organisation, namely “causing change that increases the productivity of the Scrum Team”. If you are convinced Pair Programming increases productivity, it would be good to first identify what concrete productivity problems are happening, and connect that to how Pair Programming can help, while keeping in mind that our principle is to be as lightweight as possible in our process. It is not about mindlessly implementing every promoted idea and selling it as “good practice.” “If there is no problem to be solved, then there is no point implementing a solution.” Pair Programming is a solution to problems. The one over-arching smell which Pair Programming might help with (note the word “might”, because problems could run deeper) is that of siloed knowledge of the domain/system/code within the team. This smell manifests itself in symptoms like:
a bottleneck in the team due to high inter-dependency between team members
an uneven spread of work within the team in an iteration/sprint
fear of changing unfamiliar parts of the system
a drop in the team’s engineering practices over time
inability of some team members to estimate a piece of work
lack of confidence in supporting a production system
a low bus factor in the team (also known as panic in the team when a critical team member goes on a really long vacation or worse — “you better be on call while holidaying on a beautiful remote island”)
Circling back to the Scrum Master’s role, it is true that you can’t really enforce anything apart from the Scrum rules. But there is nothing stopping you from influencing and encouraging the Development Team’s experimentation with Pair Programming in order to be more productive, especially when you are able to connect Pair Programming as a solution to productivity-impeding causes. Even if you fail, continue to listen to understand their concerns and objections. Give the Development Team time to digest, and try again some time later. It’s a huge weight off your chest if you find yourself in the enviable position of having team members who are keen on trying Pair Programming. After all, this is an initiative from the Development Team. But don’t assume this practice will be welcomed into the team’s way of working.

Uh Oh ..
There Are Concerns From The Sponsor and Product Owner
It is normal that there will be concerns or objections pertaining to the cost and impracticality of Pair Programming. Many would question why there is a need to pay two people to do the same job when one can do it “perfectly fine.” Of course, the key assumption here is the idea of “perfectly fine”. It is good to explore what this means. If it means delivering the project or product with the symptomatic smells mentioned earlier, and accepting all the risks (technical, staffing, maintenance, knowledge) that this brings, then there really isn’t a reason for Pair Programming. This assumption is based on attempting to locally optimise the capability of each individual for the short term rather than the collective whole for the long term. To be able to see Pair Programming in a positive light, a long-term view is needed. This practice reduces staffing risk on the project or product: people might get sick or quit, but the knowledge is already well shared within the Development Team. It increases the overall technical skill, as experienced developers can do hands-on mentoring for junior developers. It also speeds up onboarding when there is a need to add more people to the team (with a caveat¹). In my experience, I have never encountered a case where the delivery speed dropped because a new person joined a team that does Pair Programming. In addition, the practice of Pair Programming helps to combat the unhealthy software development smells I mentioned earlier, which in turn benefits the project or product development overall. If you are interested in a researcher’s perspective on this topic, check out Laurie Williams and Alistair Cockburn’s The Costs and Benefits of Pair Programming.

Journey to the New World
Adopting the perspective of the Satir Change Model, the practice of Pair Programming is a foreign element that is injected into the team’s Old Status Quo. You can almost predict that they will go through some stage of resistance and chaos. Strong, determined and well-gelled teams will probably be able to claw their way from chaos to the New Status Quo through relentlessly integrating the practice into the team’s life. In this case, there is less facilitation support you need to provide. However, the typical situation will involve a bit more help from the Scrum Master. Do not underestimate the adjustment needed for team members to transform to the New Status Quo, where Pair Programming becomes natural to the team’s practice.

From Chaos to the New Norm
Different people in the team have different Change Quotients², or abilities to adapt to change. Each person also brings their own expectations into how Pair Programming will be practiced in the team. These expectations are often assumed and create friction whenever they differ. You can help the team to get on the same wavelength by facilitating their working agreement when it comes to Pair Programming. Focus on what values and behaviour they agree to bring into the pairing session. Consider having a discussion around how the embodiment of the 5 values of Scrum will help. Exploring Extreme Programming’s values of Communication and Feedback can be useful as well. Look into:
how disagreements should be resolved (this is inevitable when you have more than one brain thinking)
permission to raise uncomfortable issues between a pair:
hygiene issues (e.g. sharing of keyboard/mouse, bad breath or body odour, pairing with a sick person) and, sometimes, perceived attitude or attention problems
On the more practical side of things, have a starting agreement on the following areas (the team can always change it after trying it out):
When to pair and when not to pair (a recommended minimum would be that Pair Programming is a must when writing production code, while not mandatory for other activities such as experimenting or researching).
How long to pair program before taking a break (Pair Programming will be more mentally taxing, and taking a short break will help to keep the pair productive). Try the Pomodoro Technique and see if it helps. Experiment with taking a break when the pair has finished integrating the code and has kicked off the automated build. The team can have other creative approaches.
How often to swap pairs (a pair that sticks together for a long time may create a new knowledge silo, which is exactly what we are trying to avoid with Pair Programming). See if the team can adopt Promiscuous Pairing. Or perhaps pick a logical point to swap pairs (e.g. after a story is done), or even a time-based approach (every day, every half a day, every 2 hours, etc., but definitely not as long as every sprint). Try different ways to determine what is helpful. In the early days of Extreme Programming, there were even experiments that involved swapping pairs every 5 minutes. Extreme indeed.
Core hours of pairing: Let’s face it — coding is not the only thing developers do. They need to reply to emails, spend time in backlog refinement, update time sheets, participate in daily scrums, and attend the occasional long-lunch appointments. In some cases, people also simply need some alone time (this is Pair Programming, not Siamese-twin Programming). Setting core hours of pairing helps each individual to better plan for the day. This also means that the core pairing hours cannot be exactly the same as working hours. They need to be shorter so developers can do other things. If the idea of Pair Programming for most of the day is too high a bar, try experimenting with half-day pairing.
Handling of interruptions and distractions: Unless you work in some sort of programmer’s paradise, you will never have the chance to do a full day of coding. Interruptions will happen. If core hours of pairing are established, developers can better plan meetings to happen outside of core hours to minimise interruptions. It is worth mentioning that the use of mobile devices can be particularly distracting to pairing (incoming calls and messages, social media updates, etc.). I have seen how the sudden popularity of smartphones in 2008 impacted a good team’s Pair Programming practice. Unless there are urgent calls, the recommended agreement is not to have them around during pairing. After all, a healthy pair will take frequent breaks throughout the core pairing hours, and they will have a chance to check their phones.
Consider skipping the formal code review: If the team has an existing practice of code review in place or uses Git Flow’s pull requests, consider relaxing the requirement for code that is pair programmed. Pair Programming produces a better form of code because the code is reviewed on the fly. If one is unable to trust a pair’s work on the code base, you have a bigger issue that needs to be addressed.
Sickness: If a person is sick, encourage the person to stay at home (if they must be in the office, then they should not pair up).
Because knowledge is already shared through pairing, the team can operate well without the sick person around. Catching bugs would not be helpful (they have enough bugs to solve on their own, thank you very much!)
On navigating the implementation of Pair Programming between two people of differing backgrounds (e.g. experienced/novice pairing, introvert/extrovert pairing, etc.), I highly recommend reading Laurie Williams’s Pair Programming Illuminated.

Walk a Mile in Their Shoes
As a Scrum Master, you can join them in feeling that unfamiliar change. Offer to pair with a developer. You do not need to pair with them for a full day. Pair for 30 mins. Pair for an hour. Switch and pair with another person. Reflect on your experiences. In what ways did you feel uneasy? In what ways did your pair feel uneasy? What did you notice about the environment or the space where you were paired? “But I don’t know programming,” you say. Not a problem. Offer to be the Rubber Duck³ for a developer that is stuck.

Helping the Team to Perform
Pair programming can take some time to get used to. Like months. It requires a different mindset. The team will either adopt, reject or assimilate it. Teams may fall back and return to the old status quo. Encourage teams to continue ironing out the issues. Run retrospectives to find out what issues they are facing, and what changes to make to improve their experience. Consider comparing development metrics before and after Pair Programming (defect count, bus factor, estimation variation, time spent on stories, code cyclomatic complexity, automated test coverage, and velocity vs. team member availability) to see the improvement. This serves as tangible evidence to encourage the team to move forward. What does “performing” mean in Pair Programming? It means it is the default mode they will choose when working, even when under pressure. When that happens, the team has reached fluency in this practice. To borrow from the Agile Fluency Model’s view of fluency, it is “a habit of exhibiting the proficiency at all times, even when under pressure.” When you see that team members pair program by default (even under pressure), swap pairs often, and naturally ask for pairing without hesitation when they are stuck or are working on a critical piece of work, you know that the team has reached the New Status Quo. Pair Programming is now the new norm.

What’s Next?
Pair Programming can be a powerful catalyst to spread knowledge and skill in the team. Engineering practices like Test Driven Development and refactoring skills can be diffused within the team during pairing. This reinvigorates the health of the project or the development of the product at hand. It is also a powerful way to bring new team members up to speed with the least amount of disruption to the team’s work. However, just like a plant, the team needs to nurture and protect this practice continuously. One way it needs to be protected is when the team composition changes. It is important to set the expectations for new team members when working in such an arrangement. A good way to achieve alignment is to add Pair Programming into the interview process. This allows the interviewer to assess the candidate’s skill and openness to collaboration, while the candidate can get a feel of how the potential new environment is going to be. It helps them to make a well-informed decision before joining the team.

Conclusion
When it comes to Pair Programming in a team, there is more to gain than to lose. However, the benefits do not come cheap.
Investments are necessary. There will be an initial cost in time when the team moves towards the new status quo. Teams and their sponsors need to look at how they operate and decide if they need these benefits. In most cases, it will be needed. It is in the stakeholders’ best interests to weigh the pros and cons, and recognise what they are giving up and the risks they are creating if they decide not to invest in this practice.

¹ This does not magically nullify Fred Brooks’s Mythical Man-Month law of “adding manpower to a late software project makes it later.”
² From Change Artistry (by Esther Derby, Gerald M. Weinberg, Johanna Rothman and Don Gray), Change Quotient “relates to how open a person is to change.”
³ Rubber Ducking: Sometimes the brain thinks too fast for its own good. Details may be skipped and assumptions are hidden. By explaining a problem verbally to a Rubber Duck (or in this case, you), the person is forced to slow down (you can only think as fast as you speak). By verbalizing it, the thought process is made explicit and assumptions are surfaced. This helps to bring to light more understanding about the problem. And the best thing about being a Rubber Duck is, you do not need to say anything! Just lend a listening ear. What an easy way to help remove an impediment :)

A Simple Guide To Cy...

Hello everyone! I am a Test Automation Engineer trying to look at other options apart from conventional Selenium… and I finally came across Cypress! This post is all about explaining my first-hand experience with Cypress, a test automation tool.

Why Cypress?
Let’s look at a common example to explain how Selenium works. We will open a Google webpage in the Firefox browser and check for the Google Search button. To understand Cypress’s edge, we need to understand the Selenium architecture first. In a nutshell, this is the test automation process that happens with conventional Selenium. Selenium consists of two components.
Bindings: Libraries for different programming languages that we use to write our tests with.
WebDriver: A program that can manage and fully control a designated, specific browser.
The important thing to note here is that these two components communicate over HTTP by exchanging JSON payloads. This is well defined by the WebDriver Protocol, a W3C Candidate Recommendation. Every command used in a test results in a JSON payload sent over the network. This network communication happens even if the tests are run locally. In this case, requests are sent to localhost, where there is a loopback network interface. First, a specific driver for the browser is initialised (WebDriver is an interface, and FirefoxDriver is a class implementing that interface). Once the corresponding WebDriver is initialised, the JSON Wire Protocol is invoked by the implementing class — FirefoxDriver in our case — and a session is created to execute subsequent commands. Subsequently, a web element for the button is created, and for each action we express in the language binding, the JSON Wire Protocol is called and travels over the network via HTTP. Still confusing? In short, for each line of Selenium code, the JSON Wire Protocol is called and in turn talks to the browser over the network via HTTP methods (GET, POST, PUT, DELETE).
Bottom line? The Selenium architecture works through the network, and this brings delay, which can sometimes be significant. Cypress addresses this issue by changing the process. Cypress has its own mechanism for manipulating the DOM in the browser. Cypress runs directly in the browser, so there is no network communication involved. By running directly in the browser, Cypress has access to everything in the browser, including your application under test.

Selenium vs Cypress
Cypress — Automation Testing Framework
Cypress is an automation testing tool built for modern web applications like React, Vue.js, Angular, etc. It is a test automation tool, but it is not based on Selenium and is fundamentally different from it. Selenium WebDriver works from outside the web browser, whereas Cypress works directly inside the browser, on the DOM elements. Initially, it was developed for developers to do unit testing. However, it was later extended to testers to do end-to-end automation testing. Cypress is installed through npm and tests are written in JavaScript. If you have experience with JavaScript, it is easy to work with Cypress.
Advantages: Execution speed is high. Can capture videos and take screenshots. Easy debugging. Able to visualise which test and command is running in your app.
Disadvantages: It only supports the Chrome browser (Firefox support is in progress). It only supports JavaScript. It does not support any native or mobile events.
Pre-Requisites: Install Node.js. Install any IDE, like Visual Studio Code.

Let’s Play With Your First Cypress Test:
1. Open the Visual Studio Code IDE.
2. Create a folder.
3. Open a terminal — click View -> Terminal.
4. npm init is the initialiser, which is used to set up a new or existing npm package. Type the command in the terminal:
npm init -y
5. The command below installs the Cypress package. Type it in the terminal:
npm install cypress --save-dev
6. The command below opens the Cypress environment. Type it in the terminal:
./node_modules/.bin/cypress open
7. Click the “OK, Got It” button in the Cypress GUI.
8. In the Cypress GUI, we have default sample specs, and you can just click any one of the spec files. You will be able to view the automation test for that particular spec file. Now, enjoy the sweet feeling of successfully installing and running the Cypress test automation framework!
9. In your project folder, the structure should look like this:

Let’s start with our scripts:
Under the framework structure, we will store our spec files in the integration folder. Delete the default examples folder and its scripts. Create a new script file with the .spec.js extension (example: test.spec.js).

├── cypress
│   ├── integration
│   │   ├── test.spec.js

Add the code below to your spec file (test.spec.js):

describe("Verify user should be able to search for a keyword in Google", () => {
  it("Launch", function() {
    cy.visit('https://www.google.com/');
  });
  it('Enter the search keyword', function() {
    cy.get('.gLFyf').type("Palo IT").should("have.value", "Palo IT");
  });
  it('Click on search button', function() {
    cy.contains('Google Search').click();
  });
  it('Verify the search title', function() {
    cy.title().should('eq', 'Palo IT - Google Search');
  });
});

Cypress has adopted Mocha’s BDD syntax, with describe(), context(), it(), etc. It’s a very useful way to keep tests easy to read, like a feature file in Cucumber.

(A) Enter the command below in your terminal to run the script in the Chrome browser:
./node_modules/.bin/cypress open
The Cypress GUI should look like this; then click the link test.spec.js. Your script will run and display the results in your Chrome browser. Close the Cypress GUI.
(B) Enter the command below in your terminal to run the script in Electron (a headless browser):
C:\Users\Jeevan\Desktop\Jeevan\Cypress_Framework> npx cypress run
Videos are recorded automatically when you run in a headless browser. Finally, execute your written script in Cypress. And that’s it! Thanks for reading and stay tuned for more updates about this topic!
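As a side note (my own sketch, not part of the original tutorial): splitting one continuous user flow across several it() blocks can make the tests order-dependent, so a common alternative is to keep the whole journey in a single it() and use beforeEach() for navigation. Using the same Cypress commands and the .gLFyf selector assumed above:

describe('Search for a keyword in Google', () => {
  beforeEach(() => {
    // runs before every test in this block
    cy.visit('https://www.google.com/');
  });

  it('searches for "Palo IT" and verifies the result title', () => {
    cy.get('.gLFyf').type('Palo IT').should('have.value', 'Palo IT');
    cy.contains('Google Search').click();
    cy.title().should('eq', 'Palo IT - Google Search');
  });
});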

The Weird Parts Of J...

My first experience with code was during National Service when I purchased a book on Python and wrote my first “Hello World!” file. Since then, I have progressed to pick up other programming languages such as JavaScript, Ruby and Java. The following is a summary of the quirks and odd features of JavaScript that I have come across in the last year.

The typeof operator
The typeof operator returns a string stating the type of the input value.

typeof 'hello world!'; // 'string'
typeof 123; // 'number'
typeof undefined; // 'undefined'
typeof true; // 'boolean'
typeof { a: 1, b: 2, c: 3 }; // 'object'

function sayHello() { console.log('Hello world'); }
typeof sayHello; // 'function'

There is no ‘array’ type in JavaScript; use Array.isArray() to check for an array.

typeof [1, 2, 3, 4, 5]; // 'object'
Array.isArray([1, 2, 3, 4, 5]); // true

Performing mathematical operations on non-numbers results in NaN (not a number). Somehow NaN (not a number) is a number?

const foo = 5 / 'hello'; // NaN
typeof foo; // 'number'

Ever heard the saying “Everything in JavaScript is an object”?

typeof null; // 'object'

More on NaN
Comparing NaN for equality with anything results in false.

51 === NaN; // false
'hello' === NaN; // false
['this', 'is', 'an', 'array'] === NaN; // false
null === NaN; // false

Evaluating NaN against NaN results in false too.

NaN == NaN; // false
NaN === NaN; // false
const notANumber = 'abc' - 100; // NaN
notANumber === NaN; // false

We can check for NaN using the built-in isNaN() function. It converts the input value to type Number before returning true for a NaN value.

const notANumber = 'abc' - 100; // NaN
isNaN(notANumber); // true
isNaN('hello world'); // true
isNaN('12345'); // false - Number('12345') returns 12345

Implicit Coercion
Explicit coercion is an obvious attempt from the author to convert a value of one type to another type.

const str = '12345';
typeof str; // 'string'
const num1 = parseInt(str);
const num2 = Number(str);
typeof num1; // 'number'
typeof num2; // 'number'

Implicit coercion can be unclear and may be an unintended side effect. Are these strings or numbers?

const str = '12345';
typeof str; // 'string'
// Using the + operator
const plus = +str;
typeof plus; // 'number'
// String representation of a Number * 1
const times = str * 1;
typeof times; // 'number'

Using !! to get a Boolean value of the input, indicating if it is truthy or falsy:

const zero = 0;
const one = 1;
!!zero; // false
!!one; // true
const str = 'Hi this is a string.';
const emptyStr = '';
!!str; // true
!!emptyStr; // false

Here are some more examples of implicit coercion that get confusing.

!![]; // true
+[]; // 0
+!+[]; // 1
!+[] + !+[]; // 2
[+!+[]] + [+[]]; // '10'
[][[]]; // undefined
+[![]]; // NaN
typeof ([] + []); // 'string'
typeof +[]; // 'number'
typeof ![]; // 'boolean'

Scope & hoisting
Variables declared with var are function scoped.

function someFunction() {
  for (var i = 0; i < 5; i++) {
    console.log(`Inside the loop, i is ${i}`);
  }
  console.log(`Out of the loop, i is ${i}`);
}
someFunction();
// 'Inside the loop, i is 0'
// 'Inside the loop, i is 1'
// 'Inside the loop, i is 2'
// 'Inside the loop, i is 3'
// 'Inside the loop, i is 4'
// 'Out of the loop, i is 5'

Variables declared with let are block scoped.
function anotherFunction() {
  for (let i = 0; i < 5; i++) {
    console.log(`Inside the loop, i is ${i}`);
  }
  console.log(`Out of the loop, i is ${i}`);
}
anotherFunction();
// 'Inside the loop, i is 0'
// 'Inside the loop, i is 1'
// 'Inside the loop, i is 2'
// 'Inside the loop, i is 3'
// 'Inside the loop, i is 4'
// 'ReferenceError: i is not defined'

Function declarations are hoisted to the top of the file, so they can be called before they are declared. On the other hand, function expressions are not hoisted, so they cannot be called before they are expressed.

helloDeclaration(); // 'hello function declaration'
// Declaration
function helloDeclaration() { console.log('hello function declaration'); }

helloExpression(); // ReferenceError: cannot access 'helloExpression' before initialization (with var it would be a TypeError: helloExpression is not a function)
// Expression
const helloExpression = function() { console.log('hello function expression'); };

Variable declarations are hoisted to the top of the file, but the assignment of values is left in place for runtime execution.

a = 2; // this line (assignment) is left to be executed during runtime, it runs later
var a; // this line (variable declaration) is hoisted to the top of the file, it runs first
console.log(`a is ${a}`); // 'a is 2'

Comparing the above snippet with this:

console.log(`a is ${a}`); // 'a is undefined'
var a = 2; // this statement is broken into two parts: var a; and a = 2;
// var a (variable declaration) is hoisted to the top of the file, a = 2 (value assignment) is not hoisted

Strict equality vs loose equality
JavaScript has == and === to check for equality, as well as != and !== to check for non-equality. === is known as strict equality; it checks for both value and type equality. On the other hand, loose equality is represented by == and it only checks for value equality. Coercion is allowed for ==, and JavaScript will attempt to convert the values to a common type.

const num = 123;
const str = '123';
num == str; // true
num === str; // false
1 == true; // true
1 === true; // false
0 == false; // true
0 === false; // false

Comparing arrays and objects
Arrays and objects are reference types. Comparing 2 different arrays/objects using == or === returns false, as they point to different arrays/objects in memory. To compare the elements of an array or the key/value pairs of an object, a deep comparison has to be done.

const arr1 = [1, 2, 3, 4, 5];
const arr2 = [1, 2, 3, 4, 5];
arr1 == arr2; // false
arr1 === arr2; // false
const obj1 = { a: 1, b: 2, c: 3 };
const obj2 = { a: 1, b: 2, c: 3 };
obj1 == obj2; // false
obj1 === obj2; // false

Infinity and -Infinity
Number.POSITIVE_INFINITY is a numeric value representing infinity; it can also be written as Infinity. Number.NEGATIVE_INFINITY equates to negative infinity; it can also be written as -Infinity.

Infinity + Infinity; // Infinity
Infinity - Infinity; // NaN
Infinity * Infinity; // Infinity
Infinity / Infinity; // NaN
-Infinity + -Infinity; // -Infinity
-Infinity - -Infinity; // NaN
-Infinity * -Infinity; // Infinity
-Infinity / -Infinity; // NaN

The built-in Math object includes helpful methods such as Math.max() and Math.min(), which return the maximum and minimum of the input values, respectively.

Math.max(1, 20, 300, 4000, 50000); // 50000
Math.min(-1, -20, -300, -4000, -50000); // -50000

What happens if no arguments are passed into Math.max() and Math.min()?

Math.max(); // -Infinity
Math.min(); // Infinity

It has been a quirky yet enjoyable experience over the past year and I look forward to learning more JavaScript 🙃
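Following up on the note above that arrays and objects need a deep comparison: here is a minimal, illustrative deepEqual() sketch (my own addition; it deliberately ignores edge cases such as NaN, Dates and cyclic references):

function deepEqual(a, b) {
  if (a === b) return true; // same reference, or equal primitives
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) return false;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // every key of a must map to a deeply equal value in b
  return keysA.every(key => deepEqual(a[key], b[key]));
}

deepEqual([1, 2, 3], [1, 2, 3]); // true
deepEqual({ a: 1, b: { c: 2 } }, { a: 1, b: { c: 2 } }); // true
deepEqual({ a: 1 }, { a: '1' }); // false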

Lessons Learned ...

This is largely due to relatively recent advances in the field of Natural Language Processing (NLP), and to the fact that the text interface is very common (and very accessible) on mobile. Our innovation lab at Marmelab decided to explore this technology and, to do so, to build a concrete project named Tobaccobot: a virtual coach for quitting smoking in one month, with whom the user communicates solely by SMS.

The interface
The principle of the bot is very simple: someone who wants to stop smoking signs up for the programme through a web page, with their name and phone number. From then on, all interactions happen over SMS. The smoker receives a message asking how many cigarettes they smoked that day. Based on the answer to this question, the bot determines a maximum number of cigarettes not to be exceeded during the following week — the goal being to help the smoker stop completely in 4 weeks. Every morning, the smoker receives an SMS asking how many cigarettes they smoked the day before, in order to evaluate their progress. Depending on the answer, the bot encourages or scolds them. And the replies have to vary from one day to the next, so as not to bore the smoker. At the end of each week, the bot sets a new target to reach — necessarily more ambitious than the previous week's. At the end of the 4th week, the bot determines whether or not the smoker has managed to quit. It sends a farewell message and the conversation stops there. At any moment, the smoker can decide to leave the programme.
Note: we are not tobacco specialists at Marmelab — and to be honest, there isn't even a heavy smoker among us. This use case was simply chosen as the backdrop for a technical experiment. If this virtual coach one day helps someone stop smoking, we will have killed two birds with one stone!

The conversation workflow
It is not easy to find a formalism for modelling a conversational interface. We tried drawing boxes and arrows, and arrived at the following result:
Note: after development had started, we discovered a great tool for modelling a workflow from a text description: code2flow.

The technologies used
To implement this virtual coach over SMS, we chose the following technologies:
Node.js for the server side, in serverless mode with AWS Lambda
DynamoDb for storing the smoker's state
Octopush for sending and receiving SMS
nlp compromise for Natural Language Processing (NLP)
We will come back in detail to the reasons behind these choices and how they were used in the following sections. If you want to see code, jump to the end of the article for the link to the project's source, which we are publishing under the MIT licence.

What?! No botkit!
In the Node.js world, the reference library for implementing chatbots is Botkit. This very popular library, although of excellent quality, does not fit our use case. First of all, botkit mainly targets chat platforms (Slack, Messenger, etc.) but does not support sending and receiving SMS. There is botkit-sms, but that project is not very active, and it uses Twilio, whereas we chose Octopush. We would therefore have had to develop our own adapter.
Next, Botkit is designed to listen on a port for incoming messages. It is a daemon, a node process that never stops. But with serverless, the service must stop after processing each message, and is forcibly killed if it doesn't return within 5 seconds. We would therefore have had to force botkit to exit after each message by killing the node process — not very clean. Finally, since it is designed to run as a background task, botkit persists the conversation context in memory. That context is wiped on every restart, so out of the box it is not easy to keep a conversation context in serverless mode. It is of course possible to give botkit a custom conversation storage (saving to DynamoDb in our case). But botkit imposes three tables — users, channels and teams — at least two of which make no sense in our case (channels and teams). We would still have had to implement them, or at least mock them. Given all these limitations, we decided that botkit was not appropriate for our application.

AWS Lambda
You may know the principle of AWS Lambda: it is PaaS (platform-as-a-service) hosting similar to Heroku, where you deploy only... functions. In this model, an application is a set of functions that are called in response to events (for instance an HTTP request or a cron). API Gateway, another Amazon service, takes care of routing calls made to an HTTP API to a lambda function that computes the response. This lets you run code only when it is needed, and do without a web server. Our bot's activity is very sporadic: it pings the user once a day, and expects only one reply per day. Running a server that sits idle 99% of the time would be a waste in this case; the AWS Lambda approach is a perfect fit. Under the hood, AWS uses Docker to store the lambda functions. It wakes a container up when a lambda is invoked, and puts it back to sleep after a few minutes of inactivity. But all of this happens automatically, and the developer only ever sees functions. So apart from the API Gateway "server", which is really just a giant shared reverse proxy, AWS bills lambda hosting per function call, that is, per request. And it is extremely cheap (the first million requests are free).

Serverless
Serverless is an open-source JS library that makes AWS Lambda easy to use, by automating configuration and deployment to AWS. It handles not only AWS Lambda, of course, but also API Gateway for HTTP events, as well as cron and DynamoDb for the database in our case. Serverless uses a configuration file, serverless.yml, in which you declare the lambdas (functions) and the resources (resources) used by the lambdas.
Here, as an example, is the one for tobaccobot:

service: tobaccobot

functions:
  botConversation:
    handler: src/serverless/index.botConversation # the function exported under the name botConversation in the index.js file
    events: # what triggers the call to this function
      - http: # the HTTP part configures API Gateway
          method: POST
          integration: lambda
          path: bot_conversation # the path in the URL
          cors: true # the HTTP API accepts calls from any domain (CORS)
  getBotConversation:
    handler: src/serverless/index.botConversation
    events:
      - http:
          method: GET # the same function must answer both POST and GET, an Octopush constraint (see below)
          integration: lambda
          path: bot_conversation
          cors: true
  dailyMessage:
    handler: src/serverless/index.dailyMessage
    events:
      - schedule: # here the call is triggered not by an HTTP request but by a cron schedule
          rate: cron(0 8 ? * * *)
          enabled: true
  setupTables:
    handler: src/serverless/index.setupTables # no events, so it can only be invoked through the AWS API
  subscribe:
    handler: src/serverless/index.subscribe
    events:
      - http:
          method: POST
          integration: lambda
          path: subscribe
          cors: true
  reportData:
    handler: src/serverless/index.reportData
    events:
      - http:
          method: POST
          integration: lambda
          path: report_data
          cors: true

resources:
  Resources:
    # a dynamodb table to store the smokers' data
    DynamoDbSmokerTable: # resource names must be unique
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: smoker
        AttributeDefinitions:
          - AttributeName: phone
            AttributeType: S # string
        KeySchema:
          - AttributeName: phone
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
    # an IAM policy to let the lambdas access this dynamodb table
    DynamoDBSmokerIamPolicy: # including the policy names
      Type: AWS::IAM::Policy
      DependsOn: DynamoDbSmokerTable
      Properties:
        PolicyName: lambda-dynamodb-smoker # this name must also be unique
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:DescribeTable
                - dynamodb:GetItem
                - dynamodb:PutItem
                - dynamodb:UpdateItem
                - dynamodb:DeleteItem
                - dynamodb:Scan
              Resource: arn:aws:dynamodb:*:*:table/smoker
        Roles:
          - Ref: IamRoleLambdaExecution
    # another dynamodb table to store the data of smokers who reached the end of the programme
    DynamoDbArchiveTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: archive
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
    # like the previous one, it needs a policy to make it accessible
    DynamoDBArchiveIamPolicy:
      Type: AWS::IAM::Policy
      DependsOn: DynamoDbArchiveTable
      Properties:
        PolicyName: lambda-dynamodb-archive
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:DescribeTable
                - dynamodb:GetItem
                - dynamodb:PutItem
                - dynamodb:UpdateItem
                - dynamodb:DeleteItem
                - dynamodb:Scan
              Resource: arn:aws:dynamodb:*:*:table/archive
        Roles:
          - Ref: IamRoleLambdaExecution

provider:
  name: aws
  runtime: nodejs4.3
  stage: dev
  region: eu-west-1
  cfLogs: true

plugins:
  - serverless-webpack
  - serverless-offline

custom:
  webpack: ./webpack.config.serverless.js # our webpack config
  serverless-offline: # the configuration for running locally
    babelOptions:
      presets: ["es2015-node4", "es2016"]
      plugins: ["add-module-exports", "transform-runtime"]

Serverless provides its own version of the aws-sdk package, already configured with the right credentials. And that includes the IAM permissions.
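For context, each handler referenced in the configuration above (src/serverless/index.botConversation, for instance) is simply a function exported from index.js with the standard Lambda signature. The body below is a hypothetical skeleton, not the actual Tobaccobot implementation; it only shows the callback-style contract that Serverless wires up to API Gateway:

// src/serverless/index.js — hypothetical skeleton of one handler
export const botConversation = (event, context, callback) => {
    // event carries the payload forwarded by API Gateway
    // (its exact shape depends on the integration type configured above)

    // ... parse the incoming SMS, update the smoker's state in DynamoDb, send the next SMS ...

    // answer quickly with an empty body, as Octopush expects (see the Octopush pitfalls below)
    callback(null, {});
};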
Serverless: the pitfalls

The main snag with serverless is the development environment. A developer does not have AWS services running on their workstation, so how do you test lambda functions in that context? The serverless-webpack plugin can serve the lambdas locally, but it does not follow the API Gateway specification. Fortunately, there is the serverless-offline plugin, which emulates AWS Lambda and API Gateway, and also accepts a Babel configuration. It is a must-have!

Serverless went through a major overhaul between versions 0.5 and 1.0, and a lot of the documentation you will find still covers the previous version. Don't be surprised when copy/pasting from Stack Overflow gets you nowhere, and read the official docs.

Lambda logs can be read with the serverless logs -f [lambdaName] command. No matter how many containers AWS uses, all the logs of a lambda are gathered chronologically. Serverless automatically records the output of console.error() and console.info(), but it ignores console.log().

API Gateway can only return JSON. It is therefore impossible to use a lambda to serve HTML, or a generated image.

Node.js

As for the code itself, AWS Lambda runs Node 4.3.2. Serverless compresses the lambda function's code into a zip file. Node packages are not included, and AWS Lambda will not install any on its side. To use external packages, we therefore have to concatenate our code and that of its dependencies into a single function, which is the job of a module bundler. We chose webpack, which we already use routinely for frontend development. Serverless also provides a webpack plugin to automate the build of the files to deploy:

plugins:
    - serverless-webpack
custom:
    webpack: path/to/webpack.config.js

And since we are using webpack anyway, we might as well add Babel too, to take advantage of the latest ES6 features. Nothing new on that front.

One drawback of webpack is that some libraries we are used to relying on server-side no longer work. That is the case of config, for instance, which reads configuration files at runtime. The problem is mitigated by the availability of a plugin that reproduces the mechanism transparently: webpack-config-plugin.

DynamoDB

DynamoDB is a fairly simple key/value database, similar to Redis. It lets you define a table with a partition key that acts as the unique identifier of an item. You can optionally add a sort key, in which case the partition key is no longer unique and the sort key is what differentiates items. In our case we chose a single partition key: the user's phone number. Apart from the keys, a DynamoDB document has no validation and accepts any format.

The aws-sdk provides a dynamoDB object to query the DynamoDB service, and AWS also offers a very easy-to-use web console.

DynamoDB: the pitfalls

DynamoDB returns objects with a slightly peculiar structure, which spells out the type of each field:

{
    name: {
        S: 'john' // a key is added to indicate the attribute's data type, here a string
    }
}

Converting this format to plain JSON and back is tedious. Fortunately, the dynamodb-oop library performs that transformation and offers a slightly nicer API.
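As an illustration of the transformation such a library performs (this is a simplified sketch, not the dynamodb-oop API), unwrapping the typed attributes of a DynamoDB item into plain JSON can look like this for simple string and number fields:

// simplified sketch of "untyping" a DynamoDB item; not the dynamodb-oop implementation
const unwrap = (item) =>
    Object.keys(item).reduce((result, field) => {
        const typed = item[field];          // e.g. { S: 'john' } or { N: '12' }
        const type = Object.keys(typed)[0]; // 'S', 'N', ...
        result[field] = type === 'N' ? Number(typed[type]) : typed[type];
        return result;
    }, {});

unwrap({ name: { S: 'john' }, cigarettes: { N: '12' } });
// => { name: 'john', cigarettes: 12 }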
There are nevertheless two points to watch out for:

A getItem operation returns an empty object ({}), not null, when the item is not found.
The createTable and deleteTable operations, although they accept a callback, return once the operation has been initiated, not once it has finished. To be sure this kind of operation has completed, you have to use dynamoDB.waitFor, which waits for an event, in this case tableExists or tableNotExists. For createTable, for example:

function createTable(params) {
    return new Promise((resolve, reject) => {
        dynamoDB.on('error', (operation, error) => reject(error));

        dynamoDB.client.createTable(params, (err) => {
            if (err) {
                reject(err);
                return;
            }
            dynamoDB.client.waitFor('tableExists', params, (errTableExists, result) => {
                if (errTableExists) return reject(errTableExists);
                return resolve(result);
            });
        });
    });
}

Note that on the AWS side, serverless handles table creation automatically.

To emulate DynamoDB storage locally, there is a dynamodb-local module. It does not, however, offer a web interface to browse and edit the DynamoDB content easily. dynamodb-local only provides a console that is far too limited, since it requires you to write the operations to perform in JavaScript using the aws-sdk. That console is available on port 8000.

Octopush

To send SMS we chose Octopush, which is the cheapest, despite an API oriented towards advertising campaigns. To use Octopush there is a node module: octopush. Its usage is very simple:

// create an SMS instance with our credentials
const sms = new octopush.SMS(config.octopush.user_login, config.octopush.api_key);
// then call a number of configuration functions, for example:
sms.set_sms_text(message);
sms.set_sms_recipients([phone]); // careful, this takes an array
sms.set_sms_request_id(sms.uniqid()); // you can provide an identifier generated on your side
...
// send the sms
sms.send((error, result) => {
    ...
});

Note that Octopush supports mail-merge style sending, as suggested by the fact that set_sms_recipients accepts an array of phone numbers. It is then possible to substitute variables in the text. Unfortunately, there are only 5 of them:

{ch1}, whose values are set by calling sms.set_sms_fields_1([…])
{ch2}, whose values are set by calling sms.set_sms_fields_2([…])
{ch3}, whose values are set by calling sms.set_sms_fields_3([…])
{prenom}, whose values are set by calling sms.set_recipients_first_names([…])
{nom}, whose values are set by calling sms.set_recipients_last_names([…])

Octopush: the pitfalls

To handle user replies, you must provide a URL that Octopush will call with the reply. To comply with Octopush's specifications, this URL must respond immediately, without returning any content. The client application must therefore do its processing asynchronously, after having answered Octopush. Octopush also requires this URL to be reachable via GET, so it can be tested from a browser… The verification of this URL is not automated yet, and can take them up to a day…

Octopush only captures SMS sent in reply to a message that expects an answer (the option_with_replies option). This means that if the user sends several messages in a row, only the first one will be taken into account.
We needed a fourth variable for one of our messages and simply used the prenom variable in that case. At the time of writing, the Octopush documentation wrongly states that set_recipients_first_names replaces the {nom} placeholders and that set_recipients_last_names replaces the {prenom} placeholders.

Tobaccobot in detail

The conversation logic

The conversation workflow shows that this bot is in fact a perfectly classic state machine. An action (an HTTP request, a cron) moves the smoker object from one state to another according to certain rules. There are plenty of libraries for implementing a state machine, but given how simple tobaccobot's logic is, there was no need to look further than a few nested ifs inside a function. The signature of that function is (state, action) => state. If you practise functional programming or React, you probably recognise this pattern: it is a reducer. And one library has made a lot of noise with an implementation of this pattern for React: redux. Since we use that library intensively on frontend projects, we naturally started with it to implement the conversation logic. But in the end, redux brought nothing more than the native reduce() function in our case, and we ended up removing the dependency.

Here is, for example, an extract of the code which, from the smoker's state deduced from a number of cigarettes smoked, derives the message to send:

export default (evaluation) => {
    if (evaluation.backFromBad === 1) {
        return backFromBad();
    }
    if (evaluation.backFromBad === 2) {
        return backFromReallyBad(evaluation.targetConsumption);
    }
    if (evaluation.backFromBad > 2) {
        return backFromBadCombo();
    }

    const lastDelta = evaluation.delta.slice(-1)[0];
    const previousDelta = evaluation.delta.slice(-2)[0];

    if (lastDelta <= -3) {
        if (evaluation.delta.length >= 2 && previousDelta <= -3) {
            return continuedGreatProgress(lastDelta);
        }
        return greatProgress(lastDelta);
    }

    if (evaluation.state === 'bad') {
        if (evaluation.combo.hit === 2) {
            return reallyBad(reallyBadLinks[(evaluation.combo.repeatition - 1) % 3]);
        }
        if (evaluation.combo.hit > 2) {
            return badCombo(
                evaluation.combo.hit,
                evaluation.targetConsumption,
                badComboLinks[(evaluation.combo.repeatition - 1) % 3]
            );
        }
        return bad(evaluation.targetConsumption);
    }

    if (evaluation.combo.hit === 2) {
        return reallyGood();
    }
    if (evaluation.combo.hit > 2) {
        return goodCombo(evaluation.combo.hit);
    }

    return good();
};

For the content of the messages backFromBad(), backFromReallyBad() and the others, take a look at the source.

Side effects

In our state machine, actions have two effects: changing the smoker's state, and a set of operations that are not reflected in the smoker's state (storing in DynamoDB, sending SMS, logging). This set of operations cannot be modelled by a pure function (in the functional programming sense); they are called side effects. Very often, these side effects are asynchronous operations. To handle these asynchronous operations, rather than using callbacks, we used generators, with the help of sg, a small library created by marmelab. sg orchestrates asynchronous tasks with generators (as co.js does), but instead of returning promises directly, sg returns effects describing what to do (as redux-saga does).
Generators let us describe the flow of asynchronous actions in a synchronous style and, thanks to the effects, we can test the sequencing of operations without worrying about their implementations. The most commonly used effect is call: it is simply the call of an asynchronous function. For example, with the following generator:

export default function* dailyMessageSaga(smokers) {
    const dailySmokers = yield call(getDailySmokers, smokers);
    const { asked = [], dubious = [], qualified = [] } = yield call(sortSmokersByState, dailySmokers);

    yield call(notifyDubious, dubious);
    // Users with asked state haven't answered the previous day, we send them a message for the current day anyway
    yield call(notifyQualified, [...asked, ...qualified]);
}

the tests can be written like this:

describe('dailyMessageSaga', () => {
    let iterator;

    before(() => {
        iterator = dailyMessageSaga('users');
    });

    it('should call getDailySmokers with users passed to the saga', () => {
        const { value } = iterator.next();
        expect(value).toEqual(call(getDailySmokers, 'users'));
    });

    it('should call sortSmokersByState with users returned by getDailySmokers', () => {
        const { value } = iterator.next('dailySmokers');
        expect(value).toEqual(call(sortSmokersByState, 'dailySmokers'));
    });

    it('should call notifyQualified with qualified and asked key then notifyDubious with dubious key', () => {
        let { value } = iterator.next({ asked: ['asked'], qualified: ['qualified'], dubious: 'dubious' });
        expect(value).toEqual(call(notifyDubious, 'dubious'));

        value = iterator.next().value;
        expect(value).toEqual(call(notifyQualified, ['asked', 'qualified']));
    });
});

Breaking it down

Now let's move on to the implementation of our bot. It is made up of 3 lambdas:

subscribe answers the form post; it creates a user and sends the first SMS
dailyMessage is run by a cron and sends the daily message to each user, the message being based on the user's state
botConversation is called by Octopush and handles the user's replies

Let's quickly cover the subscribe lambda, which is triggered by a POST route called from a simple static form hosted on S3.

subscribe:
    handler: src/serverless/index.subscribe
    events:
        - http:
            method: POST
            integration: lambda
            path: subscribe
            cors: true

Handling incoming messages

The botConversation lambda is called by Octopush through a POST route. The dailyMessage lambda, on the other hand, is triggered by a cron:

dailyMessage:
    handler: src/serverless/index.dailyMessage
    events:
        - schedule:
            rate: cron(0 8 ? * * *)
            enabled: false

The AWS cron syntax takes 6 parameters: minutes, hours, day of the month, month, day of the week, and year. You cannot enable both the day of the month and the day of the week at the same time; to ignore one of the two, you use the ? character.

The dailyMessage lambda fetches all the users with DynamoDB's scan command. scan accepts batchSize and exclusiveStartKey parameters, which make it possible to run the command in batches. batchSize specifies the number of results to return, and exclusiveStartKey the key from which to resume the query. The result of scan includes the last returned key. To run the processing serially, we use recursion on the generator (a fuller sketch follows below):

function* dailyMessage() {
    /// ... first batch
    yield* dailyMessage(lastKey);
}
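Here is a slightly fuller sketch of that recursion, written with sg's call effect. The scanSmokers and notifySmokers helpers, the batch size and the sg import path are assumptions made for the example, not the actual tobaccobot code:

import { call } from 'sg.js'; // package name assumed
import { scanSmokers, notifySmokers } from './services'; // assumed helpers

export default function* dailyMessage(exclusiveStartKey) {
    // scanSmokers is assumed to wrap DynamoDB's scan and return { items, lastKey }
    const { items, lastKey } = yield call(scanSmokers, { batchSize: 25, exclusiveStartKey });

    yield call(notifySmokers, items); // process the current batch

    if (lastKey) {
        // more items to fetch: resume the scan from the last returned key
        yield* dailyMessage(lastKey);
    }
}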
Each user is then sorted according to their dubious/qualified state and the number of days they have left. Dubious users are those who signed up but never answered the first question, or answered it incorrectly; dailyMessage nudges them again. Finally, the users are sorted according to the number of days they have left:

If they are at the end of the programme: if their consumption has dropped to 0 cigarettes over the last 3 days, we congratulate them; otherwise, we invite them to start over.
If they are at the end of a week: we set a new target.
In all other cases: we decrement the number of remaining days and ask the user how many cigarettes they smoked yesterday.

The implementation of this state machine was quite straightforward and consists of a few nested ifs; nothing very notable, apart from the welcome help of sg, which simplifies the side effects.

Natural language processing

More and more libraries are making natural language processing (NLP) accessible, notably in node.js:

nlp_compromise
natural

NLP is a core topic for bots when it comes to handling questions. On our side, we only had to handle answers, and within a very narrow scope. nlp simply allowed us to extract the number of cigarettes from the messages sent by the user. Whether they answer "at least 15 cigarettes", "no more than fifteen cigarettes" or "15", nlp returns 15.

Conclusion

The tobaccobot project was an opportunity to get familiar with several technologies: serverless, AWS Lambda, AWS DynamoDB, Octopush. Serverless is a powerful tool, but putting the right development environment in place took a lot of experimentation before we found the right configuration. We also spent much more time reading documentation and configuring the serverless environment than we would have with a traditional server. That said, now that this groundwork has been done, setting it up will be much quicker in the future.

Once the serverless part was in place, the bot itself turned out to be simple to implement, since it boils down to taking an event (SMS or cron) and a state as input, then updating the state and generating a message as output. Modelling the conversation is therefore the hardest part. It would have been interesting to handle interaction with a group of users, or a more varied interaction. All things considered, it remains a good introduction to building a bot.

The code of our tobaccobot is available on GitHub: https://github.com/marmelab/tobaccobot

You can find the original article here!

Visually test your a...

The quickie by Alexandre Delattre (Viseo) on Marble testing with Rx (JS/Java/…) at DevFest Toulouse 2017 was particularly interesting.

What is Rx?

"Rx is a library for composing asynchronous and event-based programs by using observable sequences. It provides one core type, the Observable, satellite types (Observer, Schedulers, Subjects) and operators inspired by Array#extras (map, filter, reduce, every, etc) to allow handling asynchronous events as collections." - From the RxJS doc

We can use Rx in the frontend (to combine service calls and build reactive user interfaces) as well as in the backend (combining micro-service calls, websockets, …).

The problem

The current trend is to turn imperative programming into reactive functional programming. With the tools at our disposal, testing asynchronous behaviour is hard, and developers often just skip this important step. But it is possible! And now, simpler than ever. So how do we do that? How do we check that our streams unfold the way we want them to? You guessed right: with marble testing.

Marble diagrams

In order to represent Observables, we define marble diagrams. They are drawn as a horizontal timeline, with events occurring as visual nodes. A typical example is the merge function, which takes two observables and returns a merge of the two. You can refer to the RxMarbles website to find interactive diagrams of Rx Observables.

In order to use them in code, we define an ASCII notation. First, we define the time frame (the default is 10ms). Then we can have a look at the different symbols that we need:

- : nothing happens during one frame
| : the observable completes (onComplete)
# : the observable errors (onError)
x : the observable emits a value (onNext)
^ : subscription point of an Observable (only for hot Observables)
() : value grouping

Example of a mobile weather application

For this example application, the speaker chose Kotlin, but we could do the same with any Rx-supported language and platform (see the full list on the ReactiveX site).

Application requirements

We have an "instant search" application, where the user inputs their city's name. After a 500ms delay, we launch the search, and a loading indicator is visible to the user during the search. Then the result is displayed, or an error if need be.

Interfaces

Our available interfaces are the following:

interface WeatherViewModel {
    // Inputs
    val city: Subject
    // Outputs
    val state: Observable<State>
    val weather: Observable<WeatherData>
}

sealed class State
object Idle : State()
object Loading : State()
data class Error(val e: Throwable) : State()

data class WeatherData (
    val city: String,
    val pictoUrl: String,
    val minTemperature: Float,
    val maxTemperature: Float
)

interface WeatherService {
    fun getWeather(city: String): Single<WeatherData>
}

Implementation

city = BehaviorSubject.createDefault("")
state = BehaviorSubject.createDefault(Idle)
weather = city
    .filter { it.isNotEmpty() }
    .debounce(500, TimeUnit.MILLISECONDS, mainScheduler)
    .switchMap {
        weatherService.getWeather(it)
            .observeOn(mainScheduler)
            .doOnSubscribe { state.onNext(Loading) }
            .doOnSuccess { state.onNext(Idle) }
            .doOnError { state.onNext(Error(it)) }
            .toObservable()
            .onErrorResumeNext(Observable.empty())
    }

Use case diagram

For example, in this diagram, the user starts typing "Toulouse", and after 500ms without activity (no keystroke pressed), we call the webservice to get the weather in Toulouse. The webservice then returns the response (sunny weather).
Afterwards, the user wants to check the weather in Paris, so after the delay the webservice is called, and then we get the response.

Marble testing implementation

@Before
fun setup() {
    weatherService = Mockito.mock(WeatherService::class.java)
    scheduler = MarbleScheduler(100)
    viewModel = WeatherViewModelImpl(weatherService, scheduler)
}

Following are the values that we need in order to test. We map the symbol "0" to the event "empty string", the symbol "1" to the event "the user inputs 'tou'", the symbol "t" to the event "the user inputs 'toulouse'", etc.

val cityValues = mapOf(
    "0" to "",
    "1" to "tou",
    "t" to "toulouse",
    "b" to "bordeaux"
)

val stateValues = mapOf(
    "i" to Idle,
    "l" to Loading,
    "e" to Error(weatherError)
)

val weatherValues = mapOf(
    "t" to weatherData,
    "b" to bordeauxData
)

And these are the data that the webservice is mocked to respond with:

val weatherData = WeatherData("toulouse", "sunny", 20f, 30f)
val bordeauxData = WeatherData("bordeaux", "cloudy", 10f, 15f)

So now, the test looks like this:

@Test
fun test2Cities() {
    val s = scheduler
    val cityInput = s.hot("0-1-t------------b----------", cityValues)
    // debouncing               -----t           -----b
    `when`(weatherService.getWeather("toulouse"))
        .thenReturn(s.single("--t", weatherValues))
    `when`(weatherService.getWeather("bordeaux"))
        .thenReturn(s.single("--b", weatherValues))

    s.expectObservable(viewModel.weather).toBe(
        "-----------t------------b---", weatherValues)
    s.expectObservable(viewModel.state).toBe(
        "i--------l-i----------l-i---", stateValues)

    cityInput.subscribe(viewModel.city)
    s.flush()
}

We obtain an ASCII visual representation of what we simulate the user interaction to be, and then we tell the test which chain of events we expect to receive from the various observables. In this representation, we can visually check how the different timelines correspond, and easily test that the more complex chains of events actually lead to the observable we want.

Conclusion

Pros:
Tests are more concise and expressive
Complex cases can be tested visually
Testing the global coherence and behaviour becomes possible

Cons:
The API suffers from differences between the different platforms
Alignment of marbles can be visually challenging in ASCII

Possible improvements in the future

The speaker concluded by proposing future improvements to counter the cons:
Standardisation of the APIs
Development of a graphical editor for marbles

He added that if someone in the conference wanted to get involved and develop a graphical editor, it would be great and useful.
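The talk's examples are in Kotlin, but the same marble notation can be tried directly in JavaScript with the TestScheduler shipped in RxJS (version 6 and above). The following is a minimal sketch, unrelated to the weather example, just to show the syntax:

const assert = require('assert');
const { TestScheduler } = require('rxjs/testing');
const { map } = require('rxjs/operators');

const scheduler = new TestScheduler((actual, expected) =>
    assert.deepStrictEqual(actual, expected)
);

scheduler.run(({ cold, expectObservable }) => {
    // '-a-b-c|' : emits a, b, c one frame apart, then completes
    const source = cold('-a-b-c|', { a: 1, b: 2, c: 3 });
    const doubled = source.pipe(map(x => x * 2));
    expectObservable(doubled).toBe('-x-y-z|', { x: 2, y: 4, z: 6 });
});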

Create a cross platf...

First experience with react-native and react-native-web

To create a native app with code sharing, there are two main approaches today:

Hybrid app: written in JavaScript, HTML and CSS; the entire code is embedded and run in a web view on mobile. PhoneGap, for example.
JavaScript engine + native UI: written in JavaScript; UI components are translated into native UI components, and the rest of the code runs in a JavaScript engine provided by the mobile system.

React Native is a framework that represents the second philosophy. It lets you create a mobile app using JavaScript. For a web app developer without much mobile background, it can be a good way to start a mobile app. React Native is based on React and shares its design, so it should integrate well with other React libraries. It is at version 0.56-RC now, not yet a major version, but looking at who is actually using React Native (Facebook, YouTube, Skype, etc.), we can have confidence in it. React Native is a Facebook project.

To make code sharing real, we expect to get a web app at the same time, without rewriting the UI part. That is what the react-native-web framework brings: the components and APIs of React Native, on the web. As mentioned on the React Native Web home page, it is used in production by Twitter Lite. It is also supported by react-scripts. So let's start an experiment: a cross-device application built with these two frameworks. I want to do something further than a hello-world example, but let me start with initializing the project.

Initialize a project

There are two ways to initiate a React Native project, as explained here.

Create React Native App

A quick way to create and start a mobile app if you have a device on which you want to run it (otherwise you will need to install an emulator):

npm install -g create-react-native-app
create-react-native-app AwesomeProject

It will be hosted by an "expo" configuration, so you can quickly run your native app within the Expo client app. The run scripts will deploy the mobile app within an Expo container.

react-native-cli

In this case you will need a full mobile development environment to start with, which means Xcode for iOS, and Android Studio plus the Android SDK for Android.

npm install -g react-native-cli
react-native init MyNote

The script creates the two additional folders "android" and "ios", and initiates a default setup for a native app without Expo. This should be the best way to initiate a standard project. To launch a simulator, taking iOS for example, you can run:

react-native run-ios

Or you can open the .xcodeproj in Xcode and run the project. You can also do this job later in your react-native project with:

react-native upgrade

Either way of initialization, we now have a runnable native app project. So far so good; everything goes well.

Configure the native project as a web app

React Native translates its UI components to native platform components for iOS and Android, and react-native-web does that job for the web platform. Check its Github page.

We need to add a few things to make the web app available: react-dom, react-native-web and babel-plugin-react-native-web. In the entry point index.web.js, instead of the classic React way of rendering the application into the DOM, we do it the React Native way, using AppRegistry. So my entry point looks something like this:
import App from './App';
import React from 'react';
import { AppRegistry } from 'react-native';

AppRegistry.registerComponent('MyNote', () => App);
AppRegistry.runApplication('MyNote', {
    initialProps: {},
    rootTag: document.getElementById('react-native-app')
});

The thing is, react-scripts can launch a react-native project and automatically do the magical alias to react-native-web, but its embedded webpack config requires a specific folder structure that does not fit the structure created by react-native very well. So I created my own webpack.config.js and run a webpack dev server. In the webpack config, we need a babel loader for everything expected to be compiled by Babel, and we plug in babel-plugin-react-native-web to take care of aliasing 'react-native' to 'react-native-web'. Alternatively, you can declare the alias in the resolve section of the module exports. And don't forget to set index.web.js as the entry.

With all of this in place, the project and its package.json are updated accordingly, and I can now run my native app with Xcode and, on the other side, my web app with the npm run web script. When the code changes, a simple Cmd+R in the simulator or in the browser reloads the app. It takes a few extra settings for the web part; it's a pity that the web app initialization is not included in the react-native init step. But now our development environment is ready.

Development: UI components and APIs

Development is very similar to classic React; just use React Native components instead of DOM components. The basic components of React Native are quite simple: View, Text, Image, TextInput, TouchableHighlight, etc. You can easily associate a DOM counterpart (div, img, input, a) with each of them. Most apps will end up using just these basic components.

Component style is defined by the style prop. If you are familiar with CSS, the style names and values usually match how they work on the web. The events and handlers are quite similar to the DOM as well; for example, a TextInput component has onChange, onKeyPress, onFocus and onBlur props. As a web developer, you should be able to find your way around this part quite well.

More advanced native components are also available in react-native, and the most common components are well supported in react-native-web. The latest version of react-native-web adds an implementation of SectionList. Still, there are platform-specific components; DatePicker is one of them. We can regret that iOS and Android could not reach an agreement on a DatePicker interface. React Native provides a Platform API to write platform-specific code. For a date picker, for example, we could have something like this:

const DatePicker = Platform.select({
    ios: <DatePickerIOS />,
    android: <DatePickerAndroid />,
    web: <input type='date' />
});

Many third-party libraries exist today to unify the two mobile platforms (react-native-datepicker, for example), but few of them include web support.

Responsive

React Native components use the FlexBox layout. FlexBox is a helpful tool to create a responsive app. On the web side it is translated into CSS flexbox properties, which means old browsers are not supported. The flexDirection, alignItems, justifyContent, alignSelf and flex properties are available in react-native and work the same way as in CSS.

Dimensions is another helpful API. Dimensions.get can give the current height and width, and you can build dynamic rendering and styling logic on top of it. The calculation should be done at every render to guarantee that the dimensions stay up to date after a device rotation or a browser resize.
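As a hedged sketch of that idea (the component name and the 600-pixel breakpoint are arbitrary choices for the example), the dimensions can be read inside the render so that the layout adapts on the next render after a rotation or resize:

import React from 'react';
import { Dimensions, StyleSheet, View } from 'react-native';

// Recomputed on every render so a rotation or a browser resize is picked up
const getStyles = () => {
    const { width } = Dimensions.get('window');
    return StyleSheet.create({
        container: {
            flexDirection: width > 600 ? 'row' : 'column', // arbitrary breakpoint
            alignItems: 'center',
        },
    });
};

const ResponsiveRow = ({ children }) => (
    <View style={getStyles().container}>{children}</View>
);

export default ResponsiveRow;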
The Dimensions API also provides a change event listener. The Platform API is another option for building rendering logic; in that case we usually want to differentiate between a small mobile screen and a large browser window on a laptop. Platform.OS and Platform.select have three possible values: "ios", "android" and "web". I don't think it can distinguish an iPhone from an iPad, so your mobile screen layout may not be suitable for a tablet.

Navigation

Navigation is a hard part to get right when sharing code. Each platform has its own way of managing navigation and history. Unfortunately, it is also one of the essential parts of an app. React Native does not provide an "official" API for navigation; it recommends some available navigation components. I tried React Navigation, which supports both the mobile and the web platform. However, after trying several combinations of react-native and react-navigation, I settled on version 0.54.0 of react-native and 1.5.8 of react-navigation, because web support is broken after react-navigation 2.0, and I had several problems making react-navigation 1.5.8 work with other versions of react-native. Such is the instability of the JS world. The fix for the web is on the V2 roadmap.

React Navigation provides the basic navigation features: navigate, history, routing. Advanced features of React Navigation that could be interesting:

Sub-routing, multi-routing
Deep links on the 3 platforms
Customizable navigator
Customizable UI for navigation, like the header and tabs

Even though deep links are supported, I didn't find any option to change the URL when the path changes on the web platform; that needs to be implemented manually.

Other classic features that do not need UI interaction work as well on a mobile device as in a web browser (async calls, await, integration with Redux, etc.), since the code runs in a JavaScript environment. If you use a JS library that does not reference the DOM API, you should not have any surprise.

Conclusion

React Native, with the help of react-native-web, offers a quite simple way to create a cross-device application. It covers the essential requirements of an application, with the possibility to customize, and it comes with the rich ecosystem around React. It does not require a strong mobile background to start and deploy a mobile app, and it achieves real code sharing between web and mobile in 90% of cases. UI development is very similar to HTML development, and the react-native ecosystem is very dynamic. Compared with a hybrid app, native components and APIs are used in a react-native app (on the mobile platforms).

There are still some drawbacks: to integrate the web app with react-native, some manual work is needed. Even so, react-native + react-native-web remains a good choice to build a cross-device app with real code sharing, and brings significant gains in productivity.
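As a reference for the manual work mentioned above, here is a rough sketch of the webpack side of the setup described earlier; the file names, output path and extension list are assumptions, not the exact configuration used in the project:

// webpack.config.js (sketch)
const path = require('path');

module.exports = {
    entry: './index.web.js',
    output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.web.js',
    },
    resolve: {
        // alias react-native imports to react-native-web for the browser build
        alias: { 'react-native$': 'react-native-web' },
        extensions: ['.web.js', '.js', '.jsx'],
    },
    module: {
        rules: [
            {
                test: /\.jsx?$/,
                exclude: /node_modules/,
                use: 'babel-loader', // Babel presets/plugins come from .babelrc
            },
        ],
    },
};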

Dissecting Webpack: ...

In the previous post, we discussed setting up the Webpack configuration file and kick-starting it with the Webpack development server. But what makes Webpack a one-stop bundler are the loaders and plugins. Loaders help Webpack transform code or help in the development aspect, whereas plugins come in at the end, when the bundling is happening, to enhance or optimise the performance of the application. The following figure shows some recommended loaders and plugins. Credit: Roohiya Dudukela

As some of you may have noticed in the configuration chart we have above, there are 2 attributes that have not been mentioned yet:

module: { rules: [] },
plugins: []

The loaders configuration goes into the module.rules array, while the plugins configuration goes into… the plugins array. So that was easy. Let's dive right into loaders first.

Loaders

Babel Loader

npm install babel-loader --save-dev

This loader transpiles post-ES5 code down to ES5. The rule informs Webpack to use babel-loader only on .jsx files and to exclude node modules (a combined sketch of the rules from this section appears after the File Loader part below). Along with the loader, there are dependencies to be installed:

npm install babel-core --save-dev

One may ask why we need babel-loader when we already have babel-polyfill. To understand this, we need to see what the function of each tool is. This video has a good explanation of the difference. In short, babel-loader takes care of transforming syntax that is above ES5, and babel-polyfill is called to create, on the fly, new functions and methods that browsers don't support. Both complement each other and are needed to handle different parts of modern JS. For example, 'const' is transpiled to 'var', and an arrow function is transpiled to an anonymous function:

const myArray = new BetterArray( [1, 2, 3] ) → var myArray = new BetterArray( [1, 2, 3] )
var nums = list.map( ( v, i ) => v + i ) → var nums = list.map( function(v, i) { return v + i } );

CSS Loader & Style Loader

Since Webpack only understands JavaScript, we need to add loaders to tell it how to handle CSS files.

npm install css-loader --save-dev

The CSS loader will look into all the CSS imports and urls and return a string (as shown below) that becomes part of the main JS:

... \"body {\\n background-color: pink;\\n\"...

Since it is part of the JS file, the browser has no way to recognise and extract the CSS code from the JS, so the styles will not be applied.

npm install style-loader --save-dev

What we need is the style loader, which extracts the CSS string out of the JS bundle and injects it into a style tag of the HTML file. With a rule that uses the style and css loaders (in that order) on all files with a .css extension, excluding the ones in node modules, the styles are applied and CSS is taken care of.

File Loader & URL Loader

Another set of assets that we need to explicitly tell Webpack how to handle is images and fonts. The way we usually manage images is either to inject them inline in the <img> tags or to store them on a server and make network requests to render the images.

npm install file-loader --save-dev

The file loader only alters the path name of the file to give it a public URL. It looks for all the imports and urls of images being used and formats the path name accordingly. Configured this way, file-loader will load all the assets with the specified extensions and place the images in the '/images' folder, named with the format [name]_[hash].[ext]. Below are a couple of examples:

background-image: url('images/dog.png') → images/dog_436bd585...png
import Cat from 'images/cat.jpg' → images/cat_875ds32132dsda3...jpg
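For reference, here is a sketch of the rules described in this section, written with the Webpack 2/3 syntax in use at the time; the test patterns and the output folder are assumptions based on the text above:

module.exports = {
    module: {
        rules: [
            {
                test: /\.jsx?$/,           // babel-loader on .js/.jsx files only
                exclude: /node_modules/,
                use: 'babel-loader',
            },
            {
                test: /\.css$/,            // style-loader injects what css-loader resolves
                exclude: /node_modules/,
                use: ['style-loader', 'css-loader'],
            },
            {
                test: /\.(png|jpe?g|gif|ttf|woff2?)$/,
                use: {
                    loader: 'file-loader',
                    options: { name: 'images/[name]_[hash].[ext]' },
                },
            },
        ],
    },
};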
However, if we place all the images on a server, the overhead of making multiple network requests could dampen performance. There is a more advanced loader that is a wrapper around file-loader, known as url-loader.

npm install url-loader --save-dev

What this loader does is handle images based on their size. Its configuration looks a lot like the one for file-loader, but the interesting part is the 'limit' attribute, which states the size limit of the image:

< 8kb ? <img src='background-image: url(data:image/png;base64,iGwfd..)' /> : images/dog_876bd585bdc8a5cc40633ffefdb7a4a5.png

If, in this case, an image is smaller than 8kb, url-loader converts the image to a base64 string and injects it into the <img> tag; otherwise it falls back to file-loader, which creates a public URL for the asset stored on the server. Images up to a certain size are thus converted to base64 without slowing down the application. The limit can be tuned to find the most optimised performance for the project.

Standard Loader

This is an optional loader to help during development. It lints code based on the Standard JS rules. It runs as a pre-loader and lints all .jsx files, excluding the ones in node modules.

In 4.0

For using loaders in Webpack 4.0 you still have to create a configuration file, and most of the configuration remains the same.

Plugins

HTML Webpack Plugin

This plugin helps create an HTML file from scratch and injects variables into the HTML file.

npm install html-webpack-plugin --save-dev

The plugin takes a set of variables (such as title, favicon and appMountId) that can then be used in the index.html file, like this:

<title><%= htmlWebpackPlugin.options.title %></title>
<link rel="icon" href="<%= htmlWebpackPlugin.options.favicon%>">
<div id="<%= htmlWebpackPlugin.options.appMountId%>"></div>

Extract Text Webpack Plugin

This plugin allows you to extract text of any kind to a separate file. The configuration still uses the CSS loader to resolve all the CSS, but instead of injecting it into style tags with the help of style-loader, the plugin takes the CSS string and pushes it into a separate file. The extracted file is served in parallel with the bundled file (a rough sketch of such a configuration follows below). Doing this has its pros and cons. Let's go through them.

The pros are that having the CSS in a separate file, and not in style tags, obviously reduces the style tags used and consequently the bundle size. With a smaller bundle size, the load time is faster. As mentioned earlier, the CSS file is served in parallel, which eliminates the flash of unstyled content. However, enabling this extraction stops hot reload from working. And since we have another file, depending on its size, the compilation time will be longer. It is recommended to enable this plugin only in production mode.

In 4.0

extract-text-webpack-plugin is deprecated and is replaced by mini-css-extract-plugin.
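As an illustration of the pre-4.0 approach described above, a configuration using extract-text-webpack-plugin might look like the following sketch (the output filename is an assumption):

const ExtractTextPlugin = require('extract-text-webpack-plugin');

module.exports = {
    module: {
        rules: [
            {
                test: /\.css$/,
                use: ExtractTextPlugin.extract({
                    fallback: 'style-loader', // used where extraction is disabled (e.g. in dev)
                    use: 'css-loader',
                }),
            },
        ],
    },
    plugins: [
        new ExtractTextPlugin('styles.css'), // the extracted stylesheet, served alongside the bundle
    ],
};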
Common Chunks Plugin

Another very important aspect of bundling the project is code splitting, which can substantially optimise the performance of the application. This plugin helps split the common code in the project into chunks. These chunks can be loaded on demand or in parallel, with the aim of achieving a smaller bundle size and loading prioritisation. Credit: Roohiya Dudukela

A small JS file can grow to be really big, which does not bode well for any application. So it can be chunked into, for example:

Multiple entries
Chunks with code shared by multiple entries (Lodash)
Vendor libraries that do not change as frequently as the main codebase (React)

In the above example, we are adding another entry, 'vendor', to the Webpack bundle, chunking out the 3rd-party libraries. With this separate chunk, our main chunk will be reduced considerably!

In 4.0

CommonsChunkPlugin has been deprecated; instead, the optimization.splitChunks and optimization.runtimeChunk APIs can be used. This is made possible by the new plugin, SplitChunksPlugin. Instead of manually specifying what to chunk, the plugin is smart enough to identify the modules that need to be chunked.

UglifyJS Webpack Plugin

This is a familiar plugin, which obfuscates code and handles dead code elimination.

Mode

Webpack.common.js

The common Webpack config simply consists of the configuration that is shared between dev and production:

Babel-polyfill
Babel-loader
CSS & Style loader
URL loader

Webpack.dev.js

The dev config adds:

Development server with hot reload
Standard loader
CSS loader with sourcemaps (for debugging purposes)

Webpack.prod.js

The production config adds:

CSS extraction
Uglify & dead code elimination
Code splitting

Webpack Merge

We can combine the common and dev configs, and likewise the common and prod configs, with webpack-merge.

In 4.0

Without any need for webpack-merge, we can use the npm scripts to specify the --mode flag, and Webpack will take care of the rest:

"dev": "webpack --mode development",
"build": "webpack --mode production"

For a complete react-redux Webpack configuration, please take a look at this Github repo for guidance. So hopefully this has been an enlightening journey, making you feel more in control of your Webpack configuration. No more running away from it!

Pair-Programming: A ...

Of the myriad Agile software development practices, Pair-Programming is one that has never failed to fascinate me with its effectiveness and simplicity. Pair-Programming, as the name implies, requires TWO developers to work on a single task or story. When adopted as the default mode of operation, an organization must be prepared to pay more upfront (in terms of man-hour costs) compared to traditional models of software development in which only one developer is needed per story. The real question is: Can this be justified so that stakeholders are convinced?

On a current product for which the team has the luxury of indulging in Pair-Programming and Test-Driven Development as the default mode of operation, I observed that Pair-Programming has brought about a number of benefits.

Ongoing development is never stopped because somebody is not around

There is almost zero downtime in day-to-day development for stories. On a regular working day, it is incredibly rare for both programmers paired on a story to take leave at the same time. If one developer should go on leave, the remaining developer could always easily pair up with another developer in the team to continue work without ever losing the context. This mitigates knowledge loss and reduces dependency on any single employee.

Code produced is more robust

Features are less likely to break downstream due to edge cases (whether during the QA phase or in staging/production). On the current project, uncaught downstream issues, particularly those in production, are expensive to fix as personnel would have to be activated. In this case, Pair-Programming moves manpower costs upstream while improving the application's robustness. If you are in the finance/telco space, this might be an important consideration since downtime frequently translates to potential penalties from the regulatory authorities.

Onboarding time is significantly reduced

By pairing an expert with a novice (in terms of system knowledge), pair-programming provides an appropriate setting where the novice can ask questions easily and gain hands-on experience faster. By encouraging human interaction through swapping of pairs, the newcomer will also blend into the team much quicker compared to solo development.

Good team practices can be enforced

Having developers frequently rotated across different pairs helps to encourage the adoption of important practices like TDD, proper coding standards and version control. Shared responsibility of code commits also results in code that stands up to scrutiny since lazy coding is usually caught during pairing.

Potential Pitfalls

While pair-programming might bring about the above-mentioned benefits, it is important to note that it does come with its caveats and should not be adopted blindly.

Increased development time on some stories

Stakeholders, sponsors and development teams need to be aware that pair-programming does not translate to increased team velocity. In fact, the implementation might sometimes take longer due to the frequent discussions that take place between pairing developers to arrive at a suitable code implementation.

Pairing when not necessary

There are scenarios when pair-programming might not be optimal. For instance, while working on complex technical spikes, pairing might become a distraction when a focused investigation into the codebase is required. Conversely, pairing is overkill on low-complexity stories that are known to only require minor changes.
Developer Stickiness

Effective pairing has the side effect of developers forming strong bonds with each other. As a result, same developer-pairs would end up working on multiple stories consecutively if a rotation is not practised. Unrotated pairs have the tendency to develop a tunnel vision which reduces the effectiveness of pair programming. Developers should make it a point to switch pairing partners regularly.

Incorrect pairing dynamics

It's easy for expert-novice pairings to end up becoming teacher-student relationships. A teacher-student pairing assumes that the student is only present to learn and practise, rather than to produce actual deliverable work. For instance, a 'teacher' might constantly rework the student's code as part of the 'teaching' process, or worse still, the 'teacher' performs all the coding while the student only takes notes. In such scenarios, fellow developers in the team should step in and offer to switch pairing at the earliest opportunity.

Pair-Programming, as with other XP practices, is not a magical pill that will transform team dynamics or software quality overnight. Teams adopting pair-programming for the first time might want to introduce it gradually by applying it to more complex development stories. Pair-Programming promotes the Agile principle of favouring face-to-face conversation and, for engineering teams looking to achieve technical excellence, Pair-Programming is definitely something worth considering.

Dissecting Webpack: ...

Many front-end developers shy away from Webpack, much less tackle it head-on. The nature of Webpack is such that it has many configuration options, and tinkering with them can break the application. So this article attempts to simplify the major concepts in building a frontend project, specifically with React. Sprinkled through the article are little tips to prepare for Webpack 4.0!

What is Webpack?

Overview

A frontend application has multiple kinds of files: .js, .jsx, .png, .jpg, .css, .ttf. We cannot possibly take this bunch of files and dump them on the production web server; the load time and overhead would be ghastly. What we need is a single bundled JS file that holds the structure and logic of the components and application, a single CSS file for the styles, an HTML file to render the DOM, and an assets folder for images and fonts. There have been many tools and task runners that have helped with the bundling process, but Webpack has emerged as the one-stop solution for many React frontend developers. Credit: Roohiya Dudukela

Setup

To get started, we have to install webpack and webpack-cli.

npm install webpack -g
npm install webpack-cli -g

We also need a basic project to bundle up. For that we can create a new folder called 'react-webpack'.

mkdir react-webpack

We need a package.json file in our project, which will later be required to define our start scripts. Change directory into the 'react-webpack' folder and run npm init.

cd react-webpack
npm init

Create an index.js file that will serve as the entry point for Webpack.

touch index.js

In index.js, we can add a simple console.log.

console.log('Hello World')

We can now use webpack-cli to bundle this index.js into dist/bundle.js.

webpack index.js dist/bundle.js

And the minified code, in bundle.js, would look something like this:

...(function(module, exports) {\n\neval(\"console.log('Hello Worl...

*Update: in the latest webpack-cli version, simply calling 'webpack index.js' will auto-generate a distribution folder and a bundled file called 'main.js'.

At this point the project contains just index.js, package.json and the generated dist folder.

But as the code base grows, it is impossible to keep using webpack-cli to bundle the code for every change we make. So what Webpack allows us to do is feed it a configuration object that consists of 5 important features:

Entry / Output
Dev Server
Loaders
Plugins
Mode

Entry / Output

First up, we need to create a file for the configuration.

touch webpack.config.js

In that file, the 'webpack' library is required, along with the declaration of the config object, which is finally exported and fed to Webpack (a minimal skeleton is sketched below). Next, you guessed it right, we have to populate the configuration object.

Context

As a good practice, all the source files should be in a separate folder and not in the root folder. Specifying a context tells Webpack to look for all source files, starting from 'index.js', from there on. This eliminates the need to add dots and slashes, using relative paths, to get to a specific file. To set this up, we require the 'path' library to help map out absolute paths.

npm install --save-dev path

Entry

Since we have set up the context, or base directory, the entry file can be stated as just 'index.js' instead of './index.js'. This is where Webpack starts to chart its dependency graph.
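Putting the pieces so far together, webpack.config.js starts out as a small skeleton like the following sketch (the 'src' folder name is an assumption):

const path = require('path');
const webpack = require('webpack'); // required now, used by the plugins added later

const config = {
    // base directory: all source files, starting with index.js, are looked up from here
    context: path.resolve(__dirname, 'src'),
    entry: './index.js',
    // output and resolve are filled in next
};

module.exports = config;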
Output

The output file is what Webpack churns out at the end: the compact bundle of the whole application with the dependencies intact.

Resolve

This attribute tells Webpack which files and folders to look into when building its dependency graph.

Babel

Before we continue to the next step, which involves writing fancy JavaScript code, some Babel configuration is needed. Not all browsers are able to handle ES6 syntax, so we need to bridge the gap by adding 'babel-polyfill' to the entry attribute. For the setup, babeljs.io has pretty neat steps and explanations. babel-polyfill creates, on the fly, new functions for browsers that do not support JS code above ES5. You can take a look at the browser compatibility table to check whether babel-polyfill is required for the project. This is not the complete Babel configuration; we will add babel-loader later on to provide full support for ES6 and above.

The following, along with babel-polyfill, need to be installed for React projects:

npm install --save babel-polyfill
npm install --save-dev babel-preset-env
npm install --save-dev babel-preset-react
npm install --save-dev babel-preset-stage-0

For the Babel setup, add a .babelrc to the folder with this piece of code:

{
    "presets": [
        "es2015",
        "react",
        "stage-0"
    ]
}

What this specifies is the Babel presets. A preset is a set of plugins that supports particular language features. For example, the 'react' preset adds support for JSX, and 'es2015' for ES6 features. To use JavaScript features beyond ES7, these features can exist in various 'stages', defined as TC39 categories. Enabling 'stage-0' covers features at the idea or proposal phase, which helps us write neat code. By this point, webpack.config.js contains the context, entry, output and resolve attributes.

In 4.0

No entry and output point need to be defined explicitly: Webpack will take the index.js in the src folder as the default. (You can still override the default by stating the path in the script in package.json.)

Dev Server

Setup

This goes without saying: we need a development server to launch our application in the browser. Webpack provides us with its own dev server, webpack-dev-server. Install webpack-dev-server, webpack and webpack-cli as dev dependencies of the project:

npm install webpack-dev-server --save-dev
npm install webpack --save-dev
npm install webpack-cli --save-dev

All that needs to be done is to add a devServer attribute to the above configuration. With this, we will have a simple dev server running that serves files from the current directory.

Run server

Now, to be able to run the server, we need to tap into the package.json of the application. In the "scripts" attribute, we add a command to run webpack-dev-server with the hot attribute enabled. The hot attribute allows the server to watch for changes in the code: if anything changes, it refreshes the page automatically.

'scripts': {
    'start': 'webpack-dev-server --hot'
}

To run this, we call:

npm start

And voila! We can see the application running on localhost:3000.

In 4.0

This is as per Webpack 4.0 specifications as well. For the initial setup, you may look at this Github repo for guidance.

Up till this point, we have only scraped the surface of Webpack. There is still much to be explored; we are just getting to the exciting parts. The magic of Webpack lies in the ability to use loaders and plugins, which will be covered in Part II of this series.