
Archive for March, 2015

Building a better Web Browser….

Tuesday, March 31st, 2015

On March 23rd, 2015, ZDNet and many other specialized IT magazines published articles about the Pwn2Own 2015 contest, whose titles read (more or less):

“Pwn2Own 2015: The year every web browser went down”

And the summary of the article said: “Every major browser showed up (in its latest and best version)… every web browser got hacked”.

For those who are not familiar with the Pwn2Own contest, it is a computer hacking contest that started in 2007 and is held annually at the CanSecWest security conference. Contestants are challenged to exploit widely used software and mobile devices with previously unknown vulnerabilities. The name “Pwn2Own” is derived from the fact that contestants must “pwn” or hack the device in order to “own” or win it.

The first contest was conceived and developed by Dragos Ruiu in response to his frustration with Apple’s lack of response to the Month of Apple Bugs and the Month of Kernel Bugs, as well as Apple’s television commercials that trivialized the security built into the competing Windows operating system. At the time, there was a widespread belief that, despite these public displays of vulnerabilities in Apple products, OS X was significantly more secure than any of its competitors… interesting, isn’t it?

The Pwn2Own contest serves to demonstrate the vulnerability of devices and software in widespread use while also providing a checkpoint on the progress made in security since the previous year.

The 2015 winners of the contest received $555,500 (yes, more than half a million dollars…) in prize money, plus the laptops they used for their hacks (HP gaming notebooks) and other additional prizes…

The top “hacker” was Jung Hoon Lee (aka lokihardt) from South Korea. He left Vancouver with the impressive amount of $225,000… yes, nearly a quarter of a million dollars and half of the total prize money for the contest… Not too bad!!!!

But what makes it even more impressive is that, traditionally, the prize goes to a team… but “our lokihardt” did it as an individual competitor, not as a member of a team!!!!

All this leads me to the core subject of this post: building a better browser…

A few weeks ago I attended a talk with that title by James Mickens, who works at Microsoft Research in Redmond (Washington).

At the beginning of the World Wide Web, the browser started as a “universal HTML interpreter”… kind of a “dumb terminal of the past”… Over time a number of “modules” or “features” have been added, and the “standard modules” of today’s browsers are typically:

  • The network stack: implements transfer protocols (http, https, file, etc.)
  • The HTML and CSS (Cascading Style Sheets) parsers: validate HTML and CSS code and enforce “a valid format” if pages are ill-specified…
  • The Document Object Model (DOM tree): a browser-neutral standard to represent HTML content and its associated CSS
  • The layout and rendering engine: traverses the DOM tree and determines the visual size and spatial position of every element of the tree
  • The JavaScript interpreter: implements the JavaScript run-time and reflects the DOM tree into the JavaScript namespace, defining JavaScript objects which are essentially proxies for internal browser objects
  • The storage layer: manages access to persistent data like cookies, cached web objects, and DOM storage, a new abstraction that provides each domain with several megabytes of key/value storage (see the sketch below)
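
To make that last point concrete, here is a minimal sketch of how a page can use the per-domain key/value store through the standard Web Storage API (localStorage); the key names and values are just illustrative.

```javascript
// Persist a small per-domain preference using DOM storage (Web Storage API).
// Each origin gets its own key/value store of a few megabytes.
function savePreference(key, value) {
  try {
    localStorage.setItem(key, JSON.stringify(value)); // values are stored as strings
  } catch (e) {
    // Quota exceeded or storage disabled: fail gracefully.
    console.warn("DOM storage unavailable:", e);
  }
}

function loadPreference(key, fallback) {
  var raw = localStorage.getItem(key); // returns null if the key is absent
  return raw === null ? fallback : JSON.parse(raw);
}

savePreference("theme", { name: "dark", fontSize: 14 });
console.log(loadPreference("theme", { name: "light", fontSize: 12 }));
```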

One way or another, browsers have become a sort of “operating system”, since they have:

  • Network (XHR, WebSockets)
  • Disk IO (DOM storage)
  • Graphics (WebGL, <video>)
  • Sound (<audio>)
  • Concurrency (Web workers)
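
As a rough illustration of that “operating system” flavour, here is a minimal sketch combining three of those services from ordinary page script: a Web Worker (concurrency), XMLHttpRequest (network) and DOM storage (disk IO). The fetched URL is a hypothetical same-origin path.

```javascript
// "OS-like" services from a single web page: concurrency, networking, persistence.
var workerSource = `
  // Runs on a separate thread; it has no access to the DOM.
  onmessage = function (e) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", e.data, false); // synchronous XHR is allowed inside a worker
    xhr.send();
    postMessage(xhr.responseText.length);
  };
`;
var worker = new Worker(URL.createObjectURL(new Blob([workerSource])));
worker.onmessage = function (e) {
  localStorage.setItem("lastPayloadSize", String(e.data)); // persist the result
  console.log("bytes fetched off the main thread:", e.data);
};
worker.postMessage("/index.html"); // hypothetical URL on the page's own origin
```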

Unfortunately, browser architectures are broken because they are riddled with poor abstractions….. and the consequence is that modern web browsers make it difficult to create fast, secure, and robust programs….

Browsers like Firefox and some versions of IE (e.g. IE8) have a “monolithic architecture”. They share two important characteristics: first, a browser “instance” consists of a process containing all of the components mentioned above. In some monolithic browsers, separate tabs receive separate processes; however, within a tab, browser components are not isolated. The second characteristic of a monolithic browser is that, from the web page’s perspective, all of the browser components are either black box or grey box. In particular, the HTML/CSS parser, layout engine, and renderer are all black boxes: the application has no way to monitor or directly influence the operation of these components. Instead, the application provides HTML and CSS as inputs, and receives a DOM tree and a screen repaint as outputs. The JavaScript runtime is grey box, since the JavaScript language provides powerful facilities for reflection and dynamic object modification… but the so-called “native objects” within the browser are not so “grey” and may in many cases lead to unpleasant “surprises”…
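
A small sketch of what “grey box” means in practice: reflection and dynamic modification are fully supported for ordinary JavaScript objects, while the same operations on native (host) objects may work, silently fail or throw depending on the browser, which is precisely the problem.

```javascript
// Reflection on ordinary JavaScript objects always works ("grey box"):
var app = { render: function () { return "v1"; } };
Object.keys(app);                            // ["render"]
app.render = function () { return "v2"; };   // dynamic modification is reliable

// Host ("native") objects are proxies for internal browser code; older engines
// did not treat them as real JavaScript objects, so the same tricks could fail:
try {
  var originalCreate = document.createElement;
  document.createElement = function (tag) {  // attempt to interpose on a native API
    console.log("creating", tag);
    return originalCreate.call(document, tag);
  };
} catch (e) {
  // e.g. legacy IE host objects rejected assignment entirely
  console.warn("cannot patch a native object:", e);
}
```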

Is there any solution to the problem?

One of the solutions proposed by researchers at the University of Illinois is the so-called “OP web browser”. To enable more secure web browsing, they designed and implemented a new browser, called the OP web browser, that attempts to improve security using state-of-the-art software design approaches. They do it by combining operating system design principles with formal methods, drawing on the expertise of both communities to design a more secure web browser.

The design philosophy is to partition the browser into smaller subsystems and make all communication between subsystems simple and explicit. At the core of the design is a small browser kernel (micro-kernel) that manages the browser subsystems and interposes on all communications between them to enforce the browser security features.

This certainly represents progress over monolithic architectures, since it provides better security and fault isolation than monolithic browsers. However, OP still uses standard, off-the-shelf browser modules to provide the DOM tree, the JavaScript runtime, and so on. Thus, OP still presents web developers with a number of “frustrations” when developing “complex web applications”…

In fact, each browser provides its own implementation of the standard components. These implementation families are roughly compatible with each other, but each one has numerous quirks and bugs. Since a browser’s components are weakly “introspectable” (it is difficult to know their internal state) at best, developers are forced to use conditional code paths and ad-hoc best practices to get complex web applications running across different browsers…

There are problems with event handling, parsing bugs, rendering bugs, JavaScript/DOM incompatibilities, to mention only some…

So the Holy Grail of a “browser based on standards” that allowed “Write Once, Run Everywhere” became “Write Once, Test Everywhere” and is now “Write Variants, Test Everywhere”… What can I say…?

Summing up, it is easy to write a simple web page that looks the same and has the same functionality in all browsers. Unfortunately, web pages of even moderate sophistication quickly encounter inconsistencies and bugs in browser runtimes…

James and his team have been working on a prototype of a new generation of browsers called “exo-kernel browsers”. Their prototype, called Atlantis, tries to solve the above-mentioned problems by providing pages with an extensible execution environment. It defines a narrow API for basic services like collecting user input, exchanging network data, and rendering images. By composing these primitives, web pages can define their own custom, high-level execution environments.

Therefore, an application which does not want to depend on Atlantis’ predefined web stack can selectively redefine components of that stack, or define markup formats and scripting languages that look nothing like the current browser runtime. Unlike prior microkernel browsers like OP, and compile-to-JavaScript frameworks like GWT, Atlantis is the first browsing system to truly minimize a web page’s dependence on “black box” browser code. This should make it much easier to develop robust, secure web applications.

The master kernel contains the switchboard process, the device server and the storage manager… a very simple architecture with a relatively simple API.

Every time a “web domain” (protocol, host name, port) is instantiated, it receives a separate isolation container with an instance kernel and the script interpreter (called Syphon). A web application declares its runtime by adding an “environment” tag at the top of its markup, which allows the page at that URL to be interpreted not only as HTML but as any kind of markup language. If no environment is specified, the instance kernel assumes that the page executes on top of the “standard stack”.

The instance kernel contains two modules: the Network Manager, which interprets protocols (http, file, etc.), and the User Interface Manager, which creates a new form, registers handlers for low-level GUI events on that form, forwards those events to the application-defined runtime, and updates the bitmaps in response to messages from the layout engine.

Syphon, the script interpreter, is one of the major components of the Atlantis architecture.

Applications pass “abstract syntax trees” (ASTs) to Atlantis for execution (instead of low-level, applet-style bytecode) for two reasons: first, it is easier to optimize ASTs than bytecode, and second, it is easier to reconstruct source code from ASTs than from bytecode. This feature is particularly useful for “debugging”.

Atlantis ASTs encode a new language, called Syphon, which is a superset of the recent ECMAScript JavaScript specification, but is described with a generic tree syntax that can serve as a compilation target for other high-level languages that may or may not resemble JavaScript.
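
To see why ASTs are friendlier than bytecode for debugging, here is a hedged sketch of how an expression might be encoded as a tree of plain objects; the node names are made up and the real Syphon encoding is different.

```javascript
// A hypothetical AST for the expression "balance + deposit".
var ast = {
  type: "BinaryExpression",
  operator: "+",
  left:  { type: "Identifier", name: "balance" },
  right: { type: "Identifier", name: "deposit" }
};

// Reconstructing readable source from the tree is a simple recursive walk...
function toSource(node) {
  switch (node.type) {
    case "Identifier":
      return node.name;
    case "BinaryExpression":
      return toSource(node.left) + " " + node.operator + " " + toSource(node.right);
    default:
      throw new Error("unknown node type: " + node.type);
  }
}
console.log(toSource(ast)); // "balance + deposit"
// ...whereas recovering it from flat bytecode (PUSH balance; PUSH deposit; ADD)
// would mean reverse-engineering a stack machine.
```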

Syphon offers a number of features that facilitate the construction of robust, application-defined runtimes, such as object shimming, method binding and privileged execution, strong typing, threading, etc.

The core of the current Atlantis run-time contains, according to James, some 8,600 lines of C# code (the Syphon interpreter, instance kernel, master kernel and the IPC (inter-process communication) libraries), relying on the .NET runtime for garbage collection, data types and so on. It also includes some 5,500 lines of JavaScript for the demonstration web stack and a “compiler” from JavaScript to Syphon ASTs.

The core of Atlantis provides a very good Trusted Computing Base, enforcing, among other things, the “Same-Origin Policy”, while at the same time providing “extensibility”, allowing web pages to customize their own runtime in a robust manner.

In the lab, the Atlantis prototype has demonstrated very decent performance, and this despite the fact that it has not been optimized, which looks very encouraging.

To sum up all the above, current web browsers must support an API that is unnecessarily complex. This API is an uneasy conglomerate of disparate standards that define network protocols, mark-up formats, hardware interfaces, and more.

Using exo-kernel principles, as in Atlantis, allows each web page to ship with its own implementation of the web stack. Each page can tailor its execution environment to its specific needs; in doing so, the page liberates browser vendors from the futile task of creating a one-size-fits-all web stack.

The approach proposed by James and his team looks very good and would facilitate the development of robust and secure complex web applications… so far so good… My question to James was: why is there not much progress in this area?

There are a few reasons for it:

  • The browser technology is well known and developers are used to it
  • Browsers today are compared basically on the speed of their JavaScript (and Java) virtual machines
  • There is not yet a perception that we are reaching the limits of the current technology…

According to James, one of these days we are going to have big, very big problems and then things will have to change….

And this is one of the reasons why I started by speaking about the ZDNet article…

A personal reflection… during a Windows 10-focused keynote in January 2015, Microsoft unveiled that IE will be deprecated and a new “standard” browser will be included in Windows 10. Its code name is Spartan… we already know that it will not support “legacy technologies” such as ActiveX and Browser Helper Objects, will use an “extension system” instead, and will increase its compliance with standards… IE11 will stay in parallel for some time to support “legacy systems”…

The question is: “Will Spartan ever become an exo-kernel browser?”… or will Atlantis remain just a “research project”… and stay there?

Time will tell…… as usual !!!

Stay tuned for more….

Best

Paco

Organizing a Hackathon @ UC Berkeley: Epilogue….

Tuesday, March 24th, 2015

I ended the last part of the “Hackathon@UC Berkeley” saga with the following sentence:

“Sending the winning team members to Barcelona is being quite an adventure….all kind of issues: passports missing, different origins, destinations and dates, changes in dates after the ticket has been issued, etc… I will sleep well when they arrive in Barcelona and even better when they will be back in Berkeley…”

Well, I was right… it was difficult to sleep throughout the week before Barcelona ….

One of the members of the team had earlier commitments and asked to leave from Los Angeles and come back to Washington DC… and he would only arrive on March 2nd at 7:00… the event in Barcelona started that day at 9:00…

The second member of the team already had his ticket, but he had a mid-term exam and the professor refused to postpone it… new tickets were needed… he found a flight leaving on time and coming back on time… Uffffff…

The third member of the team did not have a passport… no passport number, no way to book his flight… I thought that getting an urgent passport was a simple and quick thing to do in the States, if properly justified… well, I was wrong; it is not so simple and it is expensive compared with what we have, for instance, in Spain…

The Barcelona event would start on Monday, and here we were with one of the members of the team with a passport to be picked up Friday afternoon… to fly that evening… imagine the problem, with 95,000 people attending the Mobile World Congress from everywhere in the world… we managed to find a flight… departing from Oakland with two stopovers, exhausting…

He finally got the passport on Friday afternoon and headed to the airport to take the flight to Barcelona… so far, so good… except that he went to the wrong airport!!! When he realized it, he rushed to the other airport and missed boarding by just a few minutes…

Here we were, on Friday at 22:00, trying to find a solution… and he was the main coding engineer in the team.

Finally, he found a flight… but it would only leave on Monday, arriving Tuesday evening… OK… except that on Wednesday at 12:00 the teams had to check in their solutions and present them… I said to myself… we need a miracle or it is not going to work…

I asked the team in Barcelona to keep me informed about the arrival of the team members and of Luke and Alic. They arrived safely and on schedule, and the team of two worked in close contact with the one still in Berkeley… He arrived on Tuesday evening as planned and started working with the others… I suspect that the team did not sleep a lot that night…

When I got up on Wednesday, I asked Barcelona about the final result, since it was already Wednesday evening there… and the winner was… the application InTime by… the Berkeley Team!!!

It was just incredible… one of the members of the Barcelona team was also asking, via the WhatsApp group, about the winners… when she was told that the winner was the Berkeley team, she could not believe it… she said “the Berkeley team?… No, it is not possible, yesterday evening their application was not working…” and one of the members of the panel said… “Yes, it worked when they presented it…”…

Apparently the presentation was awesome as well; so… well done David, Andrew and Jessie!!! …miracles happen!!!… but it also shows the quality of the Computer Science students at UC Berkeley…

More details and pictures (in Spanish) about the event in Barcelona can be found here.

I could finally sleep well !!!! ;-))

I asked Luke and Alic to give me their impressions, in writing, about their experience in Barcelona and Granada (Alic); I copy them below:

” It was a thrill to see technologies and professionals from around the world converge in Barcelona for an entire week. The diverse nationalities, languages spoken and projects showcased were all interesting. Since I’m only 20, this was the first time that I witnessed a world-class industry conference and it brought to perspective how dynamic the global tech scene really is.

In a place like Berkeley, I think it’s easy to take certain things for granted. For example, we’re very close to the Silicon Valley and we have access to a great ecosystem for tech and startups. However, I think there’s a lot to see and learn from other parts of the world. Going to Barcelona and experiencing the Mobile World Congress, as well as 4YFN, was eye-opening for me and made me more curious about opportunities and possibilities outside of the Bay Area.

In terms of culture, food, and entertainment, Barcelona didn’t disappoint! I never had tapas before going to Barcelona and now I crave it here in Berkeley. I also hit some tourist spots like Park Guell, La Sagrada Familia, FC Barcelona stadium, Museu Nacional d’Art de Catalunya, and more.

I definitely want to go back and explore more of the city. Hopefully sooner rather than later! ” (dixit Luke)

” It was a wonderful experience getting to see hackers from Barcelona, Berkeley, Cordoba and Granada all come together in one place to compete and develop creative apps for smart watches. The teams not only had the opportunity to learn from each other, but also got to know each other over the course of the hackathon.

The 4YFN and Mobile World Congress were definitely amazing (and sometimes overwhelming) exhibitions of some of the newest technological advancements coming to market. It was definitely a very diverse global event where companies that ranged from local startups to multi-national corporations were able to showcase their newest tech. There was everything from the latest health monitoring devices to completely waterproof electronics, and just a plethora of smart watches.

I was very impressed by the development of and investment in infrastructure for health technologies in Granada. The completion of the Technology Park of Granada combined with the entrepreneurial spirit and technical talent in the region may position Granada to be a leader in the space of health tech and biotech.” (dixit Alic)

I want to warmly thank Alic, Luke and the volunteers for their help on this; I have really appreciated working with them towards a successful completion of the Hackathon @ UC Berkeley.

And… congratulations to the winning team: David, Andrew and Jessie!!

Best

Paco

Code for America and OpenOakland.org…. Part 1

Wednesday, March 18th, 2015

It was at the end of September 2014 that Heddy, my office colleague, pointed me to an event of Code for America (CfA) in San Francisco. When I searched the web, I discovered that it was their 2014 annual conference… unfortunately I had just missed it, since it had taken place just a few days before…

I continued searching their web site and started learning interesting things about them. Code for America was created back in 2009, and the main player, and founder, behind CfA is Jennifer Pahlka. In her 2012 TED.com talk, “Coding a better Government”, she said that she created Code for America to get the rock stars of design and coding in America “to work in an environment that represents everything that we are supposed to hate…, to work in Government”…

As stated on their website, “Code for America believes government can work for the people, by the people in the 21st century”. Code for America calls “engineers, designers, product managers, data scientists, and more” to “put your skills to work in service to your country. Let’s bring government into the 21st century together”.

Code for America runs five programs:

  • Brigades: local groups of civic hackers and other community volunteers who meet regularly to support the technology, design, and open data efforts of their local governments
  • Fellowships: small teams of developers and designers work with a city, county or state government for a year, building open source apps and helping spread awareness of how contemporary technology works among the government workforce and leadership
  • The Accelerator: provides seed funding, office space, and mentorship to civic startups
  • Peer Network: for innovators in local government
  • Code for All: organizes similar efforts outside the US, particularly Brigades and fellowship programs in countries around the world

I contacted Code for America through their info mailbox and after a few days I had some feedback. It took some time until I could visit their premises in San Francisco and meet with Catherine Bracy, Program Director in charge of International Relations. She explained to me how Code for America is organized, its working methods, how it gets funded, and the projects they are particularly proud of and consider best practice; she invited me to participate in some meetings of the Brigades and pointed me to the ones being run in Oakland (OpenOakland.org) and San Francisco. She also put me in contact with Code for Europe.

OpenOakland defines itself as a non-profit civic innovation organization that brings together coders, designers, data geeks, journalists, and city staff to collaborate on solutions to improve the lives of Oaklanders. It is part of Code for America’s Brigade program and holds frequent events for community, local government and tech folks to work together.

Open Oakland focuses on both community technology and open government projects that are supported through community partnerships and engaged volunteers.

Searching the web for references and background information for my research, “Co-production in Public Services”, I found a “Meetup” call from the OpenOakland.org Brigade and decided to send a request for participation. I quickly received a few replies welcoming my presence, and I participated in one of the Tuesday “Civic Hack Nights” that take place in one of the meeting rooms of the City Hall in downtown Oakland.

I arrived there at 18:15 and there were very few people in the room; I was greeted by Neil, who told me to take a seat, relax and wait for the rest of the people to arrive.

People kept arriving, and towards 18:30 the room was nearly full (roughly 60 people). The atmosphere was relaxed; some people were already having dinner from boxes they had brought with them.

Spike, one of the Captains of the Brigade, opened the meeting with a few introductory words about the Executive Committee and then gave the floor to the representatives of a number of projects to report on progress.

One former Code for America fellow, apparently now working for a civic-engagement-oriented company, had ordered some pizzas, and before the different project groups gathered around the room it was “pizza and candy time”.

Gradually people sat together to discuss their projects. There were a lot of conversations going on at the same time, and despite the parallel conversations the people in each group were very focused on their subject; a lot of activity was going on and interesting discussions were taking place.

I decided to start by sitting with Neil and Ronald so they could tell me about OpenOakland.org: its origins, role, projects, decision making, results, relationships with other civic organizations, and challenges. They asked me about my research project and why I was so interested in the Brigade. I told them that they had a very good reputation within Code for America. They said that, in some respects, they were more advanced than other Brigades and therefore now need less assistance from Code for America.

OpenOakland.org had just created an Executive Committee with 11 members that would soon have an “Away Day” to get to know each other better and start moving ahead.

Neil, Irish-born but an Oakland resident for many years, told me that he has been involved in civic activities in his neighbourhood for some years and thought that his experience could be useful to the Brigade. He explained to me that he does not hack, but he supports the projects and the Brigade activities. He said that it would be good to have more contacts with other civic organizations in the city. Ronald, a specialist in leadership who had worked for an NGO for many years, tries to put some framework around the projects and activities of the Brigade and supports the Executive Committee in several matters. I told Ronald that I would like to meet him to speak about his ideas.

We spoke about Fellowships and Neil called Eddie, the second Captain of the Brigade, who had been a Code for America Fellow a couple of years before. We agreed that we would have lunch together to speak about it.

Very close to Neil and Ronald’s group there was another one discussing the possibility of building a system to collect applications for summer jobs for teenagers in Oakland. There was a government official with them, and the project group was showing some web sites that could serve as a template for the system. According to the posts on the Brigade’s Google Group, it was later decided not to develop the project, for reasons I will explain in another post; it illustrates the maturity of the Brigade in terms of decisions regarding projects.

I asked Neil and Ronald about the attitude of the IT staff in the City Hall regarding the activities of the Brigade and the applications resulting from the projects. They said that the IT staff are so busy with their normal work, and their resources so scarce, that they have enough on their hands keeping the lights on and carrying out the existing activities. They do not have any special problem with the work of the Brigade, and when requested they are, most of the time, able to deliver data they may have that would be useful for the applications.

It is my understanding, from my informal conversations with developers involved in “hacking for government”, that they think the IT staff in City Halls or other government departments would probably not be able to develop the kind of applications that the hacking projects are producing, or if they did, it would take a lot of time and resources due to the way projects are managed.

There was another group focusing on “transparency” that was chaired by another City Hall official working for the “Ethics Commission”. The Brigade released in September 2014 a web application called “Open Disclosure”, which provides campaign finance data that shows the flow of money into Oakland mayoral campaigns. They were now working on extending it to other campaigns.

Finally, there was another group discussing the “marketing” of the Brigade’s activities; they were writing their ideas on big sheets of paper stuck to the wall. Another group was discussing a project related to housing in the city. The discussion was very technical, and I understood it was about how best to display the information.

Phil Wolf, who is also a member of the Executive Committee and very active in the Google Group, asked me, before I left, to write a post about my experience…

Here it is …. And others will follow !!

I am very happy with the experience and grateful to OpenOakland.org for their welcome and help.

Stay Tuned

Best

Paco

Cloud Infrastructure Planning @ Google…..

Saturday, March 14th, 2015

Today, three guys from the “Operation Decision Support” group at Google came to campus to recruit interns and full-time employees… surprise, surprise.

It seems they were very happy with one of the summer interns who came from UC Berkeley, so they came again to introduce their department and describe their challenges and working methods.

They started by presenting the three business areas of Google:

  • The 100 billion dollar business: Ads, Search, Access (G-Fiber), Public Cloud
  • The 10 billion dollar business: YouTube, Nest, Play, Android, Chrome
  • The “bold bets”: Life Sciences, Self-driving cars, Energy, Robotics, SpaceX

Pas mal… as they say in French !!!

The Operation Decision Support (ODS) Group is a kind of “consulting group” in Google specialized in “Capacity Planning” and impact analysis on costs and pricing of the solutions and the involved resources. This is particularly important because they have to invoice external customers and ….. charge back the internal ones …..  sounds familiar?  😉

Google has 12 data centers around the world, 6 of which are in the United States and 4 in Europe. Google spends 7 to 8 billion dollars on infrastructure per year… big, big money!!!

ODS has 50 people, among whom there are 20 PhDs specialized in statistics and operational research. They also have experts in modeling and supply chain. The group is becoming very important to the company; they plan to recruit between 15 and 50 new employees this year.

They presented a couple of interesting problems, related to capacity planning, utilization of resources and costs, to illustrate the work of the group.

The first one: “Increasing the utilization of the infrastructure (CPU, memory, disk space…) through oversubscription” (internal and external customers)

Google has Tier1 and Tier2 customers with different SLAs that normally subscribe for specific capacity that is not fully used all the time. The problem to be solved is how to “oversubscribe”, that is, to sell capacity to more customers (internal or external).

There are three ways of approaching this problem:

  • Easier: Resell surplus in Tier1 and Tier2 (which on average use around 25% of the contracted capacity) with no SLA for the overcapacity sold.
  • Harder: Resell surplus in Tier1 as Tier2 with SLA
  • Hardest: Oversubscribe Tier1 with no change to its SLA.

In the first case, utilization changes with the time zone, there are peaks and valleys and there is no SLA, no guarantee, no problem.

Perhaps some “guarantee” could be provided by statistical extrapolation methods. For instance, for batch processing, it could be guaranteed that the batch is executed within the next 24 hours.

In the second case, it is necessary to collect detailed utilization data to estimate growth and security margins (Safety Stock) to guarantee SLA.

In the third case, a more sophisticated analysis of the time series of data for every task run in the Tier1 environment is needed. Workloads per task, in general, do not peak simultaneously, which allows a predictable “surplus” to be sold if some safety stock is kept.
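
A minimal sketch of the kind of calculation involved in the third case, assuming a percentile-based safety stock; the 99th-percentile threshold and the sample numbers are my own illustration, not ODS’s actual model.

```javascript
// Given an hourly utilization series for a Tier1 customer (fractions of the
// contracted capacity), reserve a high percentile of observed usage as safety
// stock and treat the remainder as surplus that could be resold.
function percentile(values, p) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var index = Math.min(sorted.length - 1, Math.floor(p * sorted.length));
  return sorted[index];
}

function sellableSurplus(utilization, contractedCapacity) {
  var safetyStock = percentile(utilization, 0.99) * contractedCapacity; // headroom for peaks
  return Math.max(0, contractedCapacity - safetyStock);
}

// Example: a customer averaging ~25% usage with an occasional 60% peak.
var hourlyUse = [0.22, 0.25, 0.19, 0.31, 0.27, 0.60, 0.24, 0.21];
console.log(sellableSurplus(hourlyUse, 1000)); // capacity units that could be resold
```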

It looks easy, but Thomas Olavson, the director of ODS, says that it is not that simple. So, how do you make this approach acceptable, taking into account that the final decision is in the hands of the implementing department (engineering, production, etc.) or the executive team? Here is the method:

  1. Partner with the engineers: fully understand the issue, work together, pilot before roll-out
  2. Build credibility and trust over time
  3. Overcome “taboos”
    1. Clear SLAs
    2. Explore Tier2 with statistically based SLAs
    3. Demonstrate economic impact
    4. Pilot, pilot and pilot.

The second case was related to the deployment of G-Fiber. Google Fiber is Google’s fiber-to-the-premises service in the US, providing broadband Internet and television to a small and slowly increasing number of locations. The service was first introduced in Kansas City, Kansas and Kansas City, Missouri, followed by expansion to 20 other Kansas City area suburbs within 3 years. Initially proposed as an experimental project, Google Fiber was announced as a viable business model on December 12, 2012.

Google is, at the end of the day, a content service provider and wants to deliver high-quality content at optimal speed to increase user satisfaction. One solution would be for connectivity service providers to plug their “pipes” directly into Google’s data centers, which is unrealistic since those are normally in remote places. Therefore the solution is to bring Google infrastructure closer to the users, and this is exactly what Google is doing with the G-Fiber service.

Answering the question of where and when to build what infrastructure is a tough optimization problem… For a not very complicated deployment, the model would have some 30,000 variables and more than 30,000 constraints…

Brian Eck, now a senior consultant at ODS, is a former IBM employee who has been working at Google for the last two years (he jokes that they have been like “dog years”, since he feels he has been working at Google for 14 years!!). He is a specialist in logistics; he was confronted with the same problem in manufacturing at IBM and concluded that the optimization approach was not the way to go…

Instead, he and his colleagues have developed a “Scenario Analysis Tool” for a reduced number of locations, translating the alternative deployment roadmaps into a five-year cost/cash model. The inputs for the model are the demand and the topology provided by the engineering team, the equipment footprint (calculated) and the unit cost of all the cost components, also provided by the engineers. The result is a cost model with the total cash flow over 5 years.

They call the model a “Big Special Purpose Calculator”, which is also very useful for studying “what if” scenarios and can be generalized to other kinds of problems at Google (some “super users” are doing it already).

One of the decisions that has to be taken in a deployment of this type is whether it is better, given the cost of the workforce including travel, to install overcapacity in locations now and come back in one or two years to update, or to set up a local team that visits periodically and upgrades as necessary.

Applying the model to a specific deployment case, the latter option allowed savings of $10M…
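
To give a feel for what such a “calculator” does, here is a toy sketch comparing the five-year cash outlay of the two roadmaps discussed above; all the cost figures are invented for illustration, and the real model obviously has far more cost components.

```javascript
// Compare the 5-year cash outlay of two hypothetical deployment roadmaps.
function fiveYearCost(plan) {
  var total = 0;
  for (var year = 0; year < 5; year++) {
    total += plan.equipmentCost(year) + plan.workforceCost(year);
  }
  return total;
}

var overbuildNow = { // install overcapacity up front, send a travelling team back in year 2
  equipmentCost: function (y) { return y === 0 ? 8.0 : (y === 2 ? 2.0 : 0); }, // $M
  workforceCost: function (y) { return (y === 0 || y === 2) ? 1.5 : 0; }       // travel-heavy visits
};

var localTeam = { // smaller initial build, a local team upgrades a little every year
  equipmentCost: function (y) { return y === 0 ? 5.0 : 1.0; },
  workforceCost: function (y) { return 0.6; }                                  // steady local staffing
};

console.log("overbuild now:", fiveYearCost(overbuildNow), "$M"); // 13
console.log("local team:   ", fiveYearCost(localTeam), "$M");    // 12
```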

The model was first implemented using a spreadsheet; it contained 60 worksheets with some 300 lines each and very complex formulas, but it allowed fine-tuning of the model. Once that was done, it was reimplemented using the R statistical package.

The critical success factors are not very different from the ones mentioned in the previous case, but here there are additional ones:

  • Strike the right level of detail: what to include, what to omit
  • Standardize data: power, colo contracts, workforce, etc

Once more a very interesting talk… it is amazing what is going on in the Bay Area..

Students were queuing to hand in their CVs or get the contact point… I wish I could… 😉

Stay tuned for more…..

Best

Paco

Fully Automated Driving…. When? How? What is missing? ……

Monday, March 2nd, 2015

Last Friday the guys from Bosch came to campus, invited by the UC Berkeley EECS School.

Bosch, like BMW or Mercedes, has a research centre in Palo Alto where engineers are creating the “vision and the roadmap” for automated driving. The centre participated in the Urban Challenge organized by DARPA and has been prototyping systems for the project since 2010.

When speaking about “automated driving”, one has to distinguish between “supervised by the driver” (some technologies, such as Park Assist and Integrated Cruise Assist, are available today, and others, like Highway Assist, are progressing fast), “highly automated” (Highway Pilot), with reduced driver supervision, and “fully automated” (Auto Pilot).

When will we see fully automated cars on the market? According to Bosch, it is likely that in 2020 we will see the first commercial prototypes of “highly automated” cars. No date for “fully automated” cars can be forecast today…

But what is missing ?

  • Surround sensing… in all circumstances !!!
  • Safety and security
  • Legislation
  • Very precise and dynamic map data
  • Highly Fault Tolerant System Architecture (what happens if the battery dies???)

Let’s examine briefly some of those aspects…

Surround sensing:

Today, 360° surround sensing is possible with the use of radars, sensors and cameras, but there are issues in special circumstances… what happens in tunnels, in low sun, or with some materials like timber when it is transported by trucks?

What is missing, among other things, is what is called the “Third Sensor Principle”, beyond radar and cameras: sensors that work in real time, asynchronously, using probabilistic algorithms, and that are computationally very efficient, with supervising systems able to decide in cases of conflicting information. In fact, a new generation of sensors…
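
As a rough illustration of “probabilistic algorithms deciding on conflicting information”, here is a toy sketch of inverse-variance fusion of two independent distance estimates (say radar and camera), with a simple consistency check; the figures and the 3-sigma conflict test are my own assumptions, not Bosch’s design.

```javascript
// Fuse two independent measurements of the same distance by inverse-variance
// weighting, and flag a conflict when they disagree more than their combined
// uncertainty allows.
function fuse(a, b) {
  var wA = 1 / (a.sigma * a.sigma); // the more certain sensor gets more weight
  var wB = 1 / (b.sigma * b.sigma);
  var mean = (wA * a.value + wB * b.value) / (wA + wB);
  var sigma = Math.sqrt(1 / (wA + wB));
  var conflict = Math.abs(a.value - b.value) >
                 3 * Math.sqrt(a.sigma * a.sigma + b.sigma * b.sigma); // ~3-sigma test
  return { value: mean, sigma: sigma, conflict: conflict };
}

var radar  = { value: 42.3, sigma: 0.5 };  // metres
var camera = { value: 44.0, sigma: 1.5 };
console.log(fuse(radar, camera)); // { value: ~42.5, sigma: ~0.47, conflict: false }
```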

Dynamic Map Data

Today, the map data we have in our GPS devices is mostly static. What is needed is absolute localization data on maps with dynamic layers, much more precision, and SLAM (Simultaneous Localization and Mapping).

Safety

The driver has to be monitored to detect distraction, drowsiness, health state, etc. Identification and adaptive assistance are also necessary, as well as the ability to return control to the driver when needed; this is a key element for this part.

Security

Protection against technical failures by means of redundancy in the steering and braking systems (some elements, like assisted steering, ESP HEV and iBooster, already exist today, particularly in electric vehicles).

One important aspect of security is quality control and testing for release. In traditional cars the quality control is done statistically but this method will not be feasible for the testing of fully automated driving vehicles. It is estimated that the number of test hours would be multiplied by a factor of one million. New release strategies are needed with a combination of statistical validation and new qualitative design and release strategies for individual components and the full integrated system…

Legislation

Currently, laws regarding traffic and car driving are enacted at the national level. However, there are two international conventions on road traffic, Geneva (1949, UN) and Vienna (1968)… the problem is that some countries have ratified one and not the other… or neither of the two!!!!

Needless to say, as in many fields of technology, legislation does not easily and quickly reflect technical progress…

To illustrate this, it seems that the Vienna (or Geneva, I do not remember…) Convention states that

“Every driver shall, at all times, be able to CONTROL his vehicle or GUIDE HIS ANIMALS”… one can imagine how modern this rule is when it speaks about… ANIMALS…

The key here is the meaning given to the word “CONTROL”… if taken literally, there is no possibility of “automated driving”… but what if CONTROL were to mean “SUPERVISION”?

It seems that the state of California has accepted the “testing” of this kind of vehicle, based on well-justified requests and with “certified” people… at least research can continue… well done… I am sure that Google has something to do with this… ;-)))

One of the questions that is often raised is “What will be the User Experience (UX)”?

What will the driver feel? Emotions? The transition of control back to the driver?…

It has become clear that the automotive industry has become a hardware/software industry; the mechanics are still important, but the car is full of IT systems that have to work in an integrated way and with very fast response times.

This is even more important in the case of fully automated driving, where the software has to be fault-tolerant, secure and very efficient… imagine the power of the embedded processors needed to take reactive action in milliseconds to determine the trajectory… and at the same time, but a little more slowly, say in seconds, to take decisions about the manoeuvre…

Bosch showed a video that illustrated their vision for Highway Pilot by 2020. I will point to it as soon as it is uploaded to their web site, because it is very interesting…

The meeting, as usual, ended with a request to EECS students to send applications for jobs/internships at the Bosch Research Centre in Palo Alto…

At the end of the meeting, I asked some questions:

Question: “There are initiatives by Google in this field, and recently the press has reported that Apple might have 1,000 engineers working on the subject… are you working with them?”

Answer: “We are not authorised to speak about the collaboration with partners”…..

Question: “Bosch is not a car manufacturer; what is your business model for this technology?”

Answer: “Bosch is a manufacturer of components or complete systems for traditional or electric cars. We are going to continue with the same approach”

Question: “If there are several suppliers of components or subsystems on the market, and the car manufacturer decides to have multiple suppliers, they will have to work together in a mission critical framework. Standards will be needed; what is the current situation?”

Answer: “Today there are low-level standards that allow communication among components and subsystems. Higher-level standards will probably be needed in the future, but which ones and when is difficult to say today. Having said that, we believe that the automotive industry will find the necessary agreements to ensure interoperability, at least at some level.”

One of the students asked a very interesting question: “Will automated driving avoid today’s typical accidents? Will there be more? Or fewer?”

Answer: “100% safety will never exist. The first accidents, once fully automated driving becomes available, will produce big headlines in the press. This is not new: the introduction of safety belts, airbags, etc. in traditional cars was criticized at the beginning; with time and technological progress the criticism has mostly disappeared; nobody would accept cars today without those safety elements, and legislation enforces them. As far as the number and type of accidents are concerned, Bosch believes that there will be fewer accidents and that they will probably be somewhat different from the accidents we see with human driving…”

Very interesting conference on a new subject…… at least for me…

I spoke to the Director of Research at the end of the conference, and we agreed that I will visit the Center when I go to Palo Alto, Mountain View or Menlo Park for other meetings in March or April.

Stay tuned for more…

Best

Paco