Why Big Fog?
As the IoT evolves, organizations are quickly moving beyond focusing on one IoT device at a time, and are now often required to manage and coordinate large numbers of IoT devices. Use cases include:
- Coordinating and optimizing the operations of drone swarms, for example during military operations or oil exploration
- Optimizing the flow of vehicle traffic in conjunction with smart signal decisions within a Smart City infrastructure
- Managing the routes of autonomous vehicles on farms and in other contexts that are safe enough for the current state of the technology
- Using AI to determine how to coordinate micro-satellites with terrestrial assets in order to achieve the overall best data transmission rates
- Developing knowledge about the behaviors of population segments within a Smart City by learning from arrays of smart sensors and cameras
- Optimizing the interactions among mobile robots that work together to produce and transport products
- Protecting valuable private data that is generated by personal transport vehicle users (e.g. sidewalk scooter users) while still providing useful insights to police and other departments within a Smart City
- And many more use cases
This trend will only continue to grow for the foreseeable future. In fact, coordinating large numbers of devices may well become the predominant type of activity throughout the IoT.
However, until now there have only been two means of dealing with the expanding scope of IoT-based use cases: uploading Big Data and what might be called Little Fog. Neither of these approaches is ideal.
With Big Data, larger and larger amounts of data must be uploaded, faster and faster. This has already begun to overload enterprise and cellular networks, while running up the costs for cloud services that store and analyze terabytes of data. Meanwhile, 5G is in its infancy, and by the time it is fully developed and widely deployed, the inevitable historical pattern will recur: growth in data usage will surpass and overwhelm the expanded capacity of the communication networks. And as the amount of data grows, the costs associated with large-scale cloud-based data analytics solutions will continue to increase.
Little Fog − by which we mean any fog-based solution that has a relatively small scale of operation − is not capable of solving these challenges. In fact, we know of no public discussion around the notion of dramatically scaling up the fog-based approach. Rather, the assumption appears to be that the fog can only handle relatively localized problems, and will typically only be used to improve response times (reduce latency) and to reduce the load on data centers for those operations that can be performed dynamically in the field. Further, the assumption has been that fog-based solutions will primarily deal with the type of data that has only short-term value − even though it is perfectly feasible to store vast amounts of data with long-term value directly within the fog, for example on inexpensive micro-SD cards that will be ubiquitous within IoT devices.
So, what is Big Fog?
The term “Big Fog” was coined by this author (Stan Stringfellow, Founder/CEO of PlasticFog Technology Corp.), who apparently also coined the verb “Out-Paradigms” at around the same time. With Big Fog, very large-scale operations can be conducted and optimized in much closer proximity to the network edge, often directly on the edge nodes (the smart things) themselves. This type of solution can optionally have a cloud-based component, such as a command and control center, but the data and the vast majority of the computational processing can remain on the smart hubs, micro data centers, edge devices, and other edge/fog-based nodes.
If you already have an app that manages groups of IoT edge nodes, you might want to integrate PlasticFog into your management functions. This will enable your app to scale while providing the benefits of fog/edge-based operations. In addition, your apps will be able to guarantee optimal solutions for your users.
But how is it possible to scale up this type of approach without overwhelming IoT networks with inter-node data traffic? After all, there has to be a way of solving the “overall problem”, and that requires holistic knowledge, which can only be generated by somehow combining the separate pieces of knowledge that live independently on large numbers of individual IoT devices.
How Big Fog works
PlasticFog is a project within the highly regarded COIN-OR Foundation, whose codebase was originally open-sourced by IBM. PlasticFog leverages decades of expert algorithmic development effort, and applies the results in new and innovative ways, in order to create the only Big Fog solution for the IoT. We work in partnership with algorithm experts at Lehigh University (PA), a world-class research university.
Essentially, PlasticFog creates widely-distributed “pricing markets” where the participants (the smart things) balance out the “costs” among themselves. These costs are not necessarily in dollars and cents, although they may be. Costs can be any application-specific variables that need to be optimized: fuel usage, time required to complete a task, current distance from a desired geolocation, amount of computational resources that would be required to perform a given task, or even more abstract concepts such as the “weights” within an artificial neural network that is being trained.
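To make the pricing-market idea concrete, here is a minimal sketch in Python. It is not the PlasticFog API; the names and numbers are hypothetical, and the coordinator uses a simple subgradient-style price update, which is one standard way such markets are implemented. Each smart thing picks the plan that maximizes its value minus the priced cost of a shared resource, and the coordinator raises the price until total demand fits the available capacity.

```python
# Hedged sketch of a pricing market (hypothetical data, not the PlasticFog API).

def best_response(options, price):
    # Each smart thing picks the plan that maximizes value minus priced usage.
    return max(options, key=lambda o: o["value"] - price * o["usage"])

def coordinate(agents, capacity, step=0.1, rounds=100):
    price = 0.0
    choices = []
    for _ in range(rounds):
        choices = [best_response(opts, price) for opts in agents]
        demand = sum(c["usage"] for c in choices)
        # Subgradient-style update: raise the price while demand exceeds capacity.
        price = max(0.0, price + step * (demand - capacity))
    return price, choices

# Hypothetical example: two drones sharing 4 units of uplink bandwidth.
drone_a = [{"usage": 3, "value": 10.0}, {"usage": 1, "value": 4.0}]
drone_b = [{"usage": 3, "value": 6.0}, {"usage": 1, "value": 5.0}]
price, choices = coordinate([drone_a, drone_b], capacity=4)
```

At the equilibrium price, drone A keeps its high-value, high-usage plan while drone B falls back to its low-usage plan, so total demand exactly matches the shared capacity − no central node ever needed the drones' raw data, only their priced responses.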
The COIN-OR projects are used in thousands of applications around the world, usually behind the scenes within libraries such as R, SAS, Python SciPy, and so forth. PlasticFog leverages the COIN-OR project Decomposition for Integer Programming (DIP). However, the entire COIN-OR suite of solvers and supporting libraries is provided within a comprehensive architecture, from which PlasticFog is able to benefit.
PlasticFog uses a method called mathematical program decomposition, and applies this methodology in the new context of the IoT. The fundamentals of decomposition are beyond the scope of this guide. Fortunately, it is not necessary for a PlasticFog developer to have a deep understanding of the underlying theory. Suffice it to say that we believe this is the only workable approach for creating Big Fog solutions. Other approaches won’t work:
- Rules engines won’t work. In general, a rules engine (which contains a set of rules in the form if-this-then-that) is a very bad idea. It is next to impossible to maintain and scale rules engines, because changes to one rule tend to propagate and affect other rules in unforeseen ways. This makes rules engine-based apps rather rigid, and difficult to integrate with external applications. This is a big disadvantage for the IoT, where integration of thousands of solutions, even dynamic integration, will be necessary over time. Rules engines also suffer from a problem that virtually all algorithms run into when trying to optimize systems (unless the algorithms have very special properties, as is the case with the COIN-OR solvers): they may find locally optimal results for some parts of the overall problem, but they can’t be depended upon to find the globally optimal result. For a business problem, this might mean that some costs are kept to a minimum here and there, but across the entire solution there is always inefficiency and waste.
- Cloud solutions won’t work, by definition: they are not Big Fog. Cloud solutions will always be important. But they require uploading data in order to form a holistic view of the real world, and this view is never the actual world in the present moment, but rather a snapshot of the world as it existed at some point(s) in the past. There are advantages and disadvantages to IoT cloud solutions. But the well-known advantages of fog/edge-based solutions can only increase in importance as the quantity and speed of the data increase. These advantages include: fast reaction times (reduced latency), data privacy protections, reduced network costs, reduced Big Data costs, and leveraging the compute and storage resources (the sunk costs) that already exist within the fog and at the edge.
- Token passing (gossip spreading) and Swarm Intelligence won’t work for most IoT problems. Consider a swarm of drones in a military or police operation. The entire swarm needs to react immediately, as a whole, to events that may occur anywhere within the environment or anywhere within the swarm itself. There isn’t time for learning and reaction to spread by osmosis. It is true that this approach can create a large-scale learned model, if the type of data that the system gathers reflects the factors that are needed to make inferences, and if the data samples capture a sufficient number of successful and unsuccessful scenarios. But being dependent on this type of learning has some important drawbacks. It isn’t flexible: it is necessary to retrain the entire solution if it is to react to new and different scenarios. It is also hard to integrate with other solutions, unless they closely match the type of input that the solution in question was trained with. Also, deep learning is generally not very effective when the problem is complex, as is the case with most large-scale IoT problems. Deep learning works best when the factors are relatively straightforward and there is a lot of repetition. This is not to say that deep learning is not important for large-scale IoT applications. But the resulting models should be flexible and easily retrained. PlasticFog can support large-scale fog-based deep learning, although currently it is necessary to implement this process yourself, similar to developing an application. This isn’t extremely difficult, but it doesn’t yet work out of the box. The important point, however, is that with PlasticFog, the deep learning models are built on, and can feed back into, an underlying widely-distributed optimization methodology which is extremely flexible, and which can be examined and tweaked by developers (it is not a black box, like an artificial neural network).
This means, for example, that deep learning can augment or improve a solution over time, but is not necessarily required for the basic functioning of the solution. And, in the meantime, the flexibility and directly manipulable nature of this approach enables much easier integration with external systems.
- Catch-as-catch-can won’t work. It may be tempting to think you can use rules engines here and there, perhaps filter some data at the edge, do some streaming data analysis, and then mix all of that with some cloud-hosted analytical system, and get it to work out for your app. It is, of course, possible to do all of those things and combine them. But this approach doesn’t (in and of itself) overcome the problems described above. If the app needs to coordinate/optimize IoT node activities at scale, and perhaps learn from those interactions, it runs into the same issues.
- Commercial optimization solvers won’t work. There are other approaches to solving optimization problems besides the mathematical programming approach used by the COIN-OR open source solvers (the leading solvers for this type of approach). These other approaches are generally implemented within commercial solvers, and although they support parallelism, they can’t be deployed as widely-distributed systems that work over IoT networks. These types of solutions work well in computing clusters. They generally employ message passing, which requires a very fast and very cheap networking capability, usually a mid-plane within a chassis that contains high-performance blade servers. This is essentially a different type of animal from the IoT.
- The PlasticFog approach will work! But, it requires that you architect your solution appropriately.
Architecting a PlasticFog solution
Consider the following application scenario:
- You have a smart robot-vehicle that picks items within a warehouse, loads them onto pallets, and moves the pallets to a loading dock. This robot-vehicle has a number of completely independent considerations:
- It must choose the optimal order for picking products off of shelves
- It must conserve fuel costs by optimizing its routes around the facility
- It must adhere to various additional constraints, such as picking products only when they are in stock, and not wasting time on orders that can’t be filled at the current time
- You also have a smart truck that transports products from warehouses to various destinations. The truck has a number of completely independent considerations:
- It must consider its geolocation when determining the best routes to take
- It must also consider traffic conditions along the potential routes
- It must make sure it has the required loading capacity available to transport the products that it will be picking up
- It must consider its fuel levels and other factors that determine how effectively it can complete any jobs that it decides to fulfill
- However, BOTH the truck and the robot-vehicle have a consideration in common:
- They are both expected to meet at a particular loading dock, and exchange a specific pallet, at a given date/time.
With PlasticFog’s underlying methodology — called “mixed integer linear program decomposition”, or MILP decomposition — the following terminology is used:
- The independent problems are called subproblems.
- The problem that links the subproblems together is called the restricted master problem (RMP). And the specific constraints within the RMP are called linking constraints.
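The warehouse scenario above can be sketched in this terminology. The following toy example is not the actual PlasticFog API; the plans, costs, and dock times are hypothetical. Each subproblem (robot, truck) proposes candidate plans it has solved locally, and the RMP picks one plan per subproblem subject to the single linking constraint that both parties arrive at the dock at the same time:

```python
from itertools import product

# Hypothetical candidate plans produced locally by each subproblem.
robot_plans = [
    {"cost": 5.0, "dock_hour": 9},   # fast picking route
    {"cost": 3.0, "dock_hour": 11},  # fuel-saving route
]
truck_plans = [
    {"cost": 8.0, "dock_hour": 9},
    {"cost": 6.0, "dock_hour": 10},
]

def solve_rmp(plan_sets):
    # Brute-force RMP: choose one plan per subproblem, enforce the linking
    # constraint (matching dock times), and keep the cheapest feasible combo.
    best = None
    for combo in product(*plan_sets):
        if len({p["dock_hour"] for p in combo}) > 1:
            continue  # linking constraint violated: dock times differ
        cost = sum(p["cost"] for p in combo)
        if best is None or cost < best[0]:
            best = (cost, combo)
    return best

cost, combo = solve_rmp([robot_plans, truck_plans])
```

Note that the RMP never sees the subproblems’ internal constraints (shelf order, traffic, fuel) − only their candidate plans and the one shared variable, the dock time. In a real decomposition the RMP would also send prices back to the subproblems to steer which candidate plans they generate next, rather than enumerating combinations by brute force.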
In order to design PlasticFog apps, you should think of your app as an RMP and a set of subproblems. To design apps most effectively:
- Try to design your apps so that most of the constraints lie within the subproblems, while limiting the constraints within the RMP as much as possible. In the example above, the robot-vehicle and the smart truck manage most of their constraints independently, and there is only one linking constraint in the RMP. This is, of course, a simplified example. But it is expected that many IoT apps can be designed to adhere to this pattern, where most of the constraints are local to IoT edge nodes, with relatively fewer linking constraints. It is possible to scale apps that have many linking constraints. But as such an app grows, it will likely require some technical effort to tune the PlasticFog back-end to leverage specific characteristics of that app.
- Whenever possible, make each subproblem correspond to a particular IoT edge node (a smart thing). For example, if you have 10 robots, each robot should become a subproblem in your PlasticFog app. The RMP might run on a nearby network hub. Try to design your app so that most of the data that a subproblem requires is generated locally on the device. For example, a smart truck that is choosing which route to take would need to consider its geolocation, which is a locally generated data point derived from a GPS radio.
- You can create multi-level hierarchies. For example, a smart hub that serves as the RMP for a group of robots (the subproblems), might itself be a subproblem within a larger hierarchy. Its associated RMP might run on a micro data center, or perhaps within the cloud. However, these kinds of hierarchies can slow down the overall performance of the system. So, you should understand the performance considerations before expending too much effort to develop apps around a multi-level hierarchy. Note that there are many ways to tune the design of multi-level hierarchical PlasticFog apps in order to make them scale more effectively. Given the current state of the technology, some of these considerations are rather specialized and highly technical. We are happy to consult with you, should this be an issue that you need to consider.
- In a Telecom Mobile Edge Computing (MEC) environment, the wireless devices would usually be subproblems, with an RMP associated with each cell tower. Each such RMP becomes a subproblem in a larger-scale (e.g. city-wide) solution. In this type of architecture, you can primarily scale the PlasticFog solution horizontally. For example, each cell tower subproblem might have linking constraints with nearby cell tower subproblems, which manage the handoff of the lower-level mobile-device subproblems from one cell tower to another. In this way, large-scale optimization problems can be solved across entire regions, with minimal network traffic and with minimal invasion of privacy.
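The multi-level structure described above can be represented as a simple tree. The sketch below is illustrative only (the node names are hypothetical, and this is not the PlasticFog data model): leaf nodes are subproblems (the smart things), and each interior node is an RMP that is itself a subproblem of the RMP one level up, as in the device → cell tower → city example:

```python
# Hypothetical MEC-style hierarchy: phones -> cell-tower RMPs -> city RMP.
city = {
    "name": "city-rmp",
    "children": [
        {"name": "tower-1-rmp", "children": [{"name": "phone-a"}, {"name": "phone-b"}]},
        {"name": "tower-2-rmp", "children": [{"name": "phone-c"}]},
    ],
}

def leaf_subproblems(node):
    # Recursively collect the leaf subproblems (the smart things) under an RMP.
    children = node.get("children", [])
    if not children:
        return [node["name"]]
    return [leaf for child in children for leaf in leaf_subproblems(child)]

def depth(node):
    # Hierarchy depth is worth tracking: each extra level can add coordination latency.
    children = node.get("children", [])
    return 1 + (max(depth(child) for child in children) if children else 0)
```

A horizontal linking constraint between `tower-1-rmp` and `tower-2-rmp` (e.g. a handoff) would connect siblings at the same level, rather than adding depth − which is why this architecture scales horizontally without the latency penalty of deeper hierarchies.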