Thursday, December 31, 2015

What is Bus Topology?

Alternatively referred to as a line topology, a bus topology is a network setup in which every computer and network device is connected to a single cable or backbone. The following sections cover both the advantages and disadvantages of using a bus topology with your devices.

Advantages of bus topology

  • It works well when you have a small network.
  • Easiest network topology for connecting computers or peripherals in a linear fashion.
  • Requires less cable length than a star topology.

Disadvantages of bus topology

  • Difficult to identify the problems if the whole network goes down.
  • It can be hard to troubleshoot individual device issues.
  • Not great for large networks.
  • Terminators are required for both ends of the main cable.
  • Additional devices slow the network down.
  • If a main cable is damaged, the network fails or splits into two.

Related pages

What is Tree Topology?
What is Star Topology?
What is Mesh Topology?

http://www.computerhope.com/jargon/b/bustopol.htm

Displaying all active Internet connections in Linux


It may be necessary to display what Internet connections are active on your Linux box.
For example, you can check whether the Apache service is running and, if it is, which network ports it is listening on with the command below:

# netstat -ntap
If you have root privileges, running the above command gives output similar to the example below.

Example output

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2208 0.0.0.0:* LISTEN 16271/hpiod
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 4917/mysqld
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 8030/apache2
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 15978/cupsd
tcp 0 0 127.0.0.1:2207 0.0.0.0:* LISTEN 16276/python
tcp6 0 0 :::5900 :::* LISTEN 5752/vino-server
tcp6 0 0 :::22 :::* LISTEN 5062/sshd
tcp6 0 148 ::ffff:192.168.2.102:22 ::ffff:192.168.2.1:3027 ESTABLISHED 7534/sshd: hope [
tcp6 0 797 ::ffff:192.168.2.1:5900 ::ffff:192.168.2.1:2592 ESTABLISHED 5752/vino-server
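
To check a single service rather than scanning the whole listing, you can filter the output. A minimal example, using the apache2 process name from the sample output above (the same approach works for sshd, mysqld, and so on):

# netstat -ntap | grep apache2
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 8030/apache2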


Tuesday, December 29, 2015

Backup and Restore MySQL Database Using mysqldump

mysqldump is an effective tool to back up a MySQL database. It creates a *.sql file containing the DROP TABLE, CREATE TABLE and INSERT INTO SQL statements of the source database. To restore the database, execute the *.sql file on the destination database. For MyISAM tables, you can instead use the mysqlhotcopy method explained earlier, as it is faster for MyISAM.

Using mysqldump, you can backup a local database and restore it on a remote database at the same time, using a single command. In this article, let us review several practical examples on how to use mysqldump to backup and restore.

For the impatient, here is a quick summary of how to back up and restore a MySQL database using mysqldump:
backup: # mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql

restore: # mysql -u root -p[root_password] [database_name] < dumpfilename.sql

How To Backup MySQL database


1. Backup a single database:

This example takes a backup of the sugarcrm database and dumps the output to sugarcrm.sql:
# mysqldump -u root -ptmppassword sugarcrm > sugarcrm.sql

# mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql
The sugarcrm.sql file will contain DROP TABLE, CREATE TABLE and INSERT commands for all the tables in the sugarcrm database. The following is a partial output of sugarcrm.sql, showing the dump information for the accounts_contacts table:
--
-- Table structure for table `accounts_contacts`
--

DROP TABLE IF EXISTS `accounts_contacts`;
SET @saved_cs_client     = @@character_set_client;
SET character_set_client = utf8;
CREATE TABLE `accounts_contacts` (
`id` varchar(36) NOT NULL,
`contact_id` varchar(36) default NULL,
`account_id` varchar(36) default NULL,
`date_modified` datetime default NULL,
`deleted` tinyint(1) NOT NULL default '0',
PRIMARY KEY  (`id`),
KEY `idx_account_contact` (`account_id`,`contact_id`),
KEY `idx_contid_del_accid` (`contact_id`,`deleted`,`account_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
SET character_set_client = @saved_cs_client;

--
-- Dumping data for table `accounts_contacts`
--

LOCK TABLES `accounts_contacts` WRITE;
/*!40000 ALTER TABLE `accounts_contacts` DISABLE KEYS */;
INSERT INTO `accounts_contacts` VALUES ('6ff90374-26d1-5fd8-b844-4873b2e42091',
'11ba0239-c7cf-e87e-e266-4873b218a3f9','503a06a8-0650-6fdd-22ae-4873b245ae53',
'2008-07-23 05:24:30',1),
('83126e77-eeda-f335-dc1b-4873bc805541','7c525b1c-8a11-d803-94a5-4873bc4ff7d2',
'80a6add6-81ed-0266-6db5-4873bc54bfb5','2008-07-23 05:24:30',1),
('4e800b97-c09f-7896-d3d7-48751d81d5ee','f241c222-b91a-d7a9-f355-48751d6bc0f9',
'27060688-1f44-9f10-bdc4-48751db40009','2008-07-23 05:24:30',1),
('c94917ea-3664-8430-e003-487be0817f41','c564b7f3-2923-30b5-4861-487be0f70cb3',
'c71eff65-b76b-cbb0-d31a-487be06e4e0b','2008-07-23 05:24:30',1),
('7dab11e1-64d3-ea6a-c62c-487ce17e4e41','79d6f6e5-50e5-9b2b-034b-487ce1dae5af',
'7b886f23-571b-595b-19dd-487ce1eee867','2008-07-23 05:24:30',1);
/*!40000 ALTER TABLE `accounts_contacts` ENABLE KEYS */;
UNLOCK TABLES;

2. Backup multiple databases:

If you want to back up multiple databases, first identify them using the show databases command, as shown below:
# mysql -u root -ptmppassword

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bugs               |
| mysql              |
| sugarcrm           |
+--------------------+
4 rows in set (0.00 sec)
For example, if you want to take backup of both sugarcrm and bugs database, execute the mysqldump as shown below:
# mysqldump -u root -ptmppassword --databases bugs sugarcrm > bugs_sugarcrm.sql
Verify that the bugs_sugarcrm.sql dump file contains both database backups:
# grep -i "Current Database:" bugs_sugarcrm.sql
-- Current Database: `bugs`
-- Current Database: `sugarcrm`

3. Backup all the databases:

The following example takes a backup of all the databases of the MySQL instance:
# mysqldump -u root -ptmppassword --all-databases > /tmp/all-database.sql

4. Backup a specific table:

In this example, we back up only the accounts_contacts table from the sugarcrm database:
# mysqldump -u root -ptmppassword sugarcrm accounts_contacts \
      > /tmp/sugarcrm_accounts_contacts.sql

5. Different mysqldump group options:

  • --opt is a group option, equivalent to --add-drop-table, --add-locks, --create-options, --quick, --extended-insert, --lock-tables, --set-charset and --disable-keys. --opt is enabled by default; disable it with --skip-opt.
  • --compact is a group option that gives less verbose output (useful for debugging). It disables structure comments and header/footer constructs, and enables the options --skip-add-drop-table, --no-set-names, --skip-disable-keys and --skip-add-locks. See the example after this list.
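
As a quick illustration of the group options above, the following hedged example dumps the same sugarcrm database twice, once with the default --opt behavior and once with --compact; the output file names are arbitrary:
# mysqldump -u root -ptmppassword sugarcrm > sugarcrm_full.sql
# mysqldump -u root -ptmppassword --compact sugarcrm > sugarcrm_compact.sql
The --compact dump is noticeably smaller because it leaves out the comments, SET statements and lock/key handling statements that --opt normally adds.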

How To Restore MySQL database


1. Restore a database

In this example, to restore the sugarcrm database, execute mysql with < as shown below. When you are restoring dumpfilename.sql on a remote database, make sure to create the sugarcrm database before performing the restore.
# mysql -u root -ptmppassword

mysql> create database sugarcrm;
Query OK, 1 row affected (0.02 sec)

# mysql -u root -ptmppassword sugarcrm < /tmp/sugarcrm.sql

# mysql -u root -p[root_password] [database_name] < dumpfilename.sql

2. Backup a local database and restore to remote server using single command:

This is a handy option if you want to keep a read-only copy of the master database from the local server on the remote server. The example below backs up the sugarcrm database on the local server and restores it as the sugarcrm1 database on the remote server. Please note that you should first create the sugarcrm1 database on the remote server before executing the following command.
[local-server]# mysqldump -u root -ptmppassword sugarcrm | mysql \
                 -u root -ptmppassword --host=remote-server -C sugarcrm1
[Note: There are two -- (hyphen) in front of host]
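
If the dump is large or has to cross a slow link, compressing it can help. A minimal sketch, assuming gzip is available and reusing the credentials from the examples above (the file name under /tmp is arbitrary):
[local-server]# mysqldump -u root -ptmppassword sugarcrm | gzip > /tmp/sugarcrm.sql.gz
[local-server]# gunzip < /tmp/sugarcrm.sql.gz | mysql -u root -ptmppassword --host=remote-server sugarcrm1
As with the previous example, the sugarcrm1 database must already exist on the remote server.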

Wednesday, December 23, 2015

Free IT ebooks download

Free IT ebooks download:

http://www.it-ebooks.org/

 

A good source of free ebook downloads for IT professionals!



Introduction to Microservices


Microservices are currently getting a lot of attention: articles, blogs, discussions on social media, and conference presentations. They are rapidly heading towards the peak of inflated expectations on the Gartner Hype cycle. At the same time, there are skeptics in the software community who dismiss microservices as nothing new. Naysayers claim that the idea is just a rebranding of SOA. However, despite both the hype and the skepticism, the Microservices Architecture pattern has significant benefits – especially when it comes to enabling the agile development and delivery of complex enterprise applications.
This blog post is the first in a 7-part series about designing, building, and deploying microservices. You will learn about the approach and how it compares to the more traditional Monolithic Architecture pattern. This series will describe the various elements of a microservices architecture. You will learn about the benefits and drawbacks of the Microservices Architecture pattern, whether it makes sense for your project, and how to apply it.

Let’s first look at why you should consider using microservices.

Building Monolithic Applications

Let’s imagine that you were starting to build a brand new taxi-hailing application intended to compete with Uber and Hailo. After some preliminary meetings and requirements gathering, you would create a new project either manually or by using a generator that comes with Rails, Spring Boot, Play, or Maven. This new application would have a modular hexagonal architecture, like in the following diagram:
[Diagram: the hexagonal architecture of the monolithic taxi-hailing application]
At the core of the application is the business logic, which is implemented by modules that define services, domain objects, and events. Surrounding the core are adapters that interface with the external world. Examples of adapters include database access components, messaging components that produce and consume messages, and web components that either expose APIs or implement a UI.
Despite having a logically modular architecture, the application is packaged and deployed as a monolith. The actual format depends on the application’s language and framework. For example, many Java applications are packaged as WAR files and deployed on application servers such as Tomcat or Jetty. Other Java applications are packaged as self-contained executable JARs. Similarly, Rails and Node.js applications are packaged as a directory hierarchy.
Applications written in this style are extremely common. They are simple to develop since our IDEs and other tools are focused on building a single application. These kinds of applications are also simple to test. You can implement end-to-end testing by simply launching the application and testing the UI with Selenium. Monolithic applications are also simple to deploy. You just have to copy the packaged application to a server. You can also scale the application by running multiple copies behind a load balancer. In the early stages of the project it works well.

Marching Towards Monolithic Hell

Unfortunately, this simple approach has a huge limitation. Successful applications have a habit of growing over time and eventually becoming huge. During each sprint, your development team implements a few more stories, which, of course, means adding many lines of code. After a few years, your small, simple application will have grown into a monstrous monolith. To give an extreme example, I recently spoke to a developer who was writing a tool to analyze the dependencies between the thousands of JARs in their multi-million line of code (LOC) application. I’m sure it took the concerted effort of a large number of developers over many years to create such a beast.
Once your application has become a large, complex monolith, your development organization is probably in a world of pain. Any attempts at agile development and delivery will flounder. One major problem is that the application is overwhelmingly complex. It’s simply too large for any single developer to fully understand. As a result, fixing bugs and implementing new features correctly becomes difficult and time consuming. What’s more, this tends to be a downwards spiral. If the codebase is difficult to understand, then changes won’t be made correctly. You will end up with a monstrous, incomprehensible big ball of mud.
The sheer size of the application will also slow down development. The larger the application, the longer the start-up time is. For example, in a recent survey some developers reported start-up times as long as 12 minutes. I’ve also heard anecdotes of applications taking as long as 40 minutes to start up. If developers regularly have to restart the application server, then a large part of their day will be spent waiting around and their productivity will suffer.
Another problem with a large, complex monolithic application is that it is an obstacle to continuous deployment. Today, the state of the art for SaaS applications is to push changes into production many times a day. This is extremely difficult to do with a complex monolith since you must redeploy the entire application in order to update any one part of it. The lengthy start-up times that I mentioned earlier won’t help either. Also, since the impact of a change is usually not very well understood, it is likely that you have to do extensive manual testing. Consequently, continuous deployment is next to impossible to do.
Monolithic applications can also be difficult to scale when different modules have conflicting resource requirements. For example, one module might implement CPU-intensive image processing logic and would ideally be deployed in Amazon EC2 Compute Optimized instances. Another module might be an in-memory database and best suited for EC2 Memory-optimized instances. However, because these modules are deployed together you have to compromise on the choice of hardware.
Another problem with monolithic applications is reliability. Because all modules are running within the same process, a bug in any module, such as a memory leak, can potentially bring down the entire process. Moreover, since all instances of the application are identical, that bug will impact the availability of the entire application.
Last but not least, monolithic applications make it extremely difficult to adopt new frameworks and languages. For example, let’s imagine that you have 2 million lines of code written using the XYZ framework. It would be extremely expensive (in both time and cost) to rewrite the entire application to use the newer ABC framework, even if that framework was considerably better. As a result, there is a huge barrier to adopting new technologies. You are stuck with whatever technology choices you made at the start of the project.
To summarize: you have a successful business-critical application that has grown into a monstrous monolith that very few, if any, developers understand. It is written using obsolete, unproductive technology that makes hiring talented developers difficult. The application is difficult to scale and is unreliable. As a result, agile development and delivery of applications is impossible.
So what can you do about it?

Microservices – Tackling the Complexity

Many organizations, such as Amazon, eBay, and Netflix, have solved this problem by adopting what is now known as the Microservices Architecture pattern. Instead of building a single monstrous, monolithic application, the idea is to split your application into a set of smaller, interconnected services.
A service typically implements a set of distinct features or functionality, such as order management, customer management, etc. Each microservice is a mini-application that has its own hexagonal architecture consisting of business logic along with various adapters. Some microservices would expose an API that’s consumed by other microservices or by the application’s clients. Other microservices might implement a web UI. At runtime, each instance is often a cloud VM or a Docker container.
For example, a possible decomposition of the system described earlier is shown in the following diagram:
[Diagram: the example taxi-hailing application decomposed into a set of microservices]
Each functional area of the application is now implemented by its own microservice. Moreover, the web application is split into a set of simpler web applications (such as one for passengers and one for drivers in our taxi-hailing example). This makes it easier to deploy distinct experiences for specific users, devices, or specialized use cases.
Each back-end service exposes a REST API and most services consume APIs provided by other services. For example, Driver Management uses the Notification server to tell an available driver about a potential trip. The UI services invoke the other services in order to render web pages. Services might also use asynchronous, message-based communication. Inter-service communication will be covered in more detail later in this series.
Some REST APIs are also exposed to the mobile apps used by the drivers and passengers. The apps don’t, however, have direct access to the back-end services. Instead, communication is mediated by an intermediary known as an API Gateway. The API Gateway is responsible for tasks such as load balancing, caching, access control, API metering, and monitoring, and can be implemented effectively using NGINX. Later articles in the series will cover the API Gateway.
[Diagram: mobile apps accessing the back-end microservices through an API Gateway]
The Microservices Architecture pattern corresponds to the Y-axis scaling of the Scale Cube, which is a 3D model of scalability from the excellent book The Art of Scalability. The other two scaling axes are X-axis scaling, which consists of running multiple identical copies of the application behind a load balancer, and Z-axis scaling (or data partitioning), where an attribute of the request (for example, the primary key of a row or identity of a customer) is used to route the request to a particular server.
Applications typically use the three types of scaling together. Y-axis scaling decomposes the application into microservices as shown above in the first figure in this section. At runtime, X-axis scaling runs multiple instances of each service behind a load balancer for throughput and availability. Some applications might also use Z-axis scaling to partition the services. The following diagram shows how the Trip Management service might be deployed with Docker running on Amazon EC2.
[Diagram: the Trip Management service deployed as multiple Docker containers behind a load balancer on Amazon EC2]
At runtime, the Trip Management service consists of multiple service instances. Each service instance is a Docker container. In order to be highly available, the containers are running on multiple Cloud VMs. In front of the service instances is a load balancer such as NGINX that distributes requests across the instances. The load balancer might also handle other concerns such as caching, access control, API metering, and monitoring.
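As a rough illustration of the deployment described above, here is a hedged sketch of starting two instances of such a service as Docker containers; the image name myorg/trip-management and the port numbers are hypothetical, not taken from the article:
# docker run -d --name trip-management-1 -p 8081:8080 myorg/trip-management
# docker run -d --name trip-management-2 -p 8082:8080 myorg/trip-management
A load balancer such as NGINX would then list the two published ports as upstream servers and spread incoming requests across them, as described above.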
The Microservices Architecture pattern significantly impacts the relationship between the application and the database. Rather than sharing a single database schema with other services, each service has its own database schema. On the one hand, this approach is at odds with the idea of an enterprise-wide data model. Also, it often results in duplication of some data. However, having a database schema per service is essential if you want to benefit from microservices, because it ensures loose coupling. The following diagram shows the database architecture for the example application.
[Diagram: the database-per-service architecture of the example application]
Each of the services has its own database. Moreover, a service can use a type of database that is best suited to its needs, the so-called polyglot persistence architecture. For example, Driver Management, which finds drivers close to a potential passenger, must use a database that supports efficient geo-queries.
On the surface, the Microservices Architecture pattern is similar to SOA. With both approaches, the architecture consists of a set of services. However, one way to think about the Microservices Architecture pattern is that it’s SOA without the commercialization and perceived baggage of web service specifications (WS-*) and an Enterprise Service Bus (ESB). Microservice-based applications favor simpler, lightweight protocols such as REST, rather than WS-*. They also very much avoid using ESBs and instead implement ESB-like functionality in the microservices themselves. The Microservices Architecture pattern also rejects other parts of SOA, such as the concept of a canonical schema.

The Benefits of Microservices

The Microservices Architecture pattern has a number of important benefits. First, it tackles the problem of complexity. It decomposes what would otherwise be a monstrous monolithic application into a set of services. While the total amount of functionality is unchanged, the application has been broken up into manageable chunks or services. Each service has a well-defined boundary in the form of an RPC- or message-driven API. The Microservices Architecture pattern enforces a level of modularity that in practice is extremely difficult to achieve with a monolithic code base. Consequently, individual services are much faster to develop, and much easier to understand and maintain.
Second, this architecture enables each service to be developed independently by a team that is focused on that service. The developers are free to choose whatever technologies make sense, provided that the service honors the API contract. Of course, most organizations would want to avoid complete anarchy and limit technology options. However, this freedom means that developers are no longer obligated to use the possibly obsolete technologies that existed at the start of a new project. When writing a new service, they have the option of using current technology. Moreover, since services are relatively small it becomes feasible to rewrite an old service using current technology.
Third, the Microservices Architecture pattern enables each microservice to be deployed independently. Developers never need to coordinate the deployment of changes that are local to their service. These kinds of changes can be deployed as soon as they have been tested. The UI team can, for example, perform A/B testing and rapidly iterate on UI changes. The Microservices Architecture pattern makes continuous deployment possible.
Finally, the Microservices Architecture pattern enables each service to be scaled independently. You can deploy just the number of instances of each service that satisfy its capacity and availability constraints. Moreover, you can use the hardware that best matches a service’s resource requirements. For example, you can deploy a CPU-intensive image processing service on EC2 Compute Optimized instances and deploy an in-memory database service on EC2 Memory-optimized instances.

The Drawbacks of Microservices

As Fred Brooks wrote almost 30 years ago, there are no silver bullets. Like every other technology, the Microservices architecture has drawbacks. One drawback is the name itself. The term microservice places excessive emphasis on service size. In fact, there are some developers who advocate for building extremely fine-grained 10-100 LOC services. While small services are preferable, it’s important to remember that they are a means to an end and not the primary goal. The goal of microservices is to sufficiently decompose the application in order to facilitate agile application development and deployment.
Another major drawback of microservices is the complexity that arises from the fact that a microservices application is a distributed system. Developers need to choose and implement an inter-process communication mechanism based on either messaging or RPC. Moreover, they must also write code to handle partial failure since the destination of a request might be slow or unavailable. While none of this is rocket science, it’s much more complex than in a monolithic application where modules invoke one another via language-level method/procedure calls.
Another challenge with microservices is the partitioned database architecture. Business transactions that update multiple business entities are fairly common. These kinds of transactions are trivial to implement in a monolithic application because there is a single database. In a microservices-based application, however, you need to update multiple databases owned by different services. Using distributed transactions is usually not an option, and not only because of the CAP theorem. They simply are not supported by many of today’s highly scalable NoSQL databases and messaging brokers. You end up having to use an eventual consistency based approach, which is more challenging for developers.
Testing a microservices application is also much more complex. For example, with a modern framework such as Spring Boot it is trivial to write a test class that starts up a monolithic web application and tests its REST API. In contrast, a similar test class for a service would need to launch that service and any services that it depends upon (or at least configure stubs for those services). Once again, this is not rocket science but it’s important to not underestimate the complexity of doing this.
Another major challenge with the Microservices Architecture pattern is implementing changes that span multiple services. For example, let’s imagine that you are implementing a story that requires changes to services A, B, and C, where A depends upon B and B depends upon C. In a monolithic application you could simply change the corresponding modules, integrate the changes, and deploy them in one go. In contrast, in a Microservices Architecture pattern you need to carefully plan and coordinate the rollout of changes to each of the services. For example, you would need to update service C, followed by service B, and then finally service A. Fortunately, most changes typically impact only one service and multi-service changes that require coordination are relatively rare.
Deploying a microservices-based application is also much more complex. A monolithic application is simply deployed on a set of identical servers behind a traditional load balancer. Each application instance is configured with the locations (host and ports) of infrastructure services such as the database and a message broker. In contrast, a microservice application typically consists of a large number of services. For example, Hailo has 160 different services and Netflix has over 600 according to Adrian Cockcroft. Each service will have multiple runtime instances. That’s many more moving parts that need to be configured, deployed, scaled, and monitored. In addition, you will also need to implement a service discovery mechanism (discussed in a later post) that enables a service to discover the locations (hosts and ports) of any other services it needs to communicate with. Traditional trouble ticket-based and manual approaches to operations cannot scale to this level of complexity. Consequently, successfully deploying a microservices application requires greater control of deployment methods by developers, and a high level of automation.
One approach to automation is to use an off-the-shelf PaaS such as Cloud Foundry. A PaaS provides developers with an easy way to deploy and manage their microservices. It insulates them from concerns such as procuring and configuring IT resources. At the same time, the systems and network professionals who configure the PaaS can ensure compliance with best practices and with company policies. Another way to automate the deployment of microservices is to develop what is essentially your own PaaS. One typical starting point is to use a clustering solution, such as Mesos or Kubernetes in conjunction with a technology such as Docker. Later in this series we will look at how software-based application delivery approaches like NGINX, which easily handles caching, access control, API metering, and monitoring at the microservice level, can help solve this problem.

Summary

Building complex applications is inherently difficult. A Monolithic architecture only makes sense for simple, lightweight applications. You will end up in a world of pain if you use it for complex applications. The Microservices architecture pattern is the better choice for complex, evolving applications despite the drawbacks and implementation challenges.
In later blog posts, I’ll dive into the details of various aspects of the Microservices Architecture pattern and discuss topics such as service discovery, service deployment options, and strategies for refactoring a monolithic application into services.
Stay tuned…
Guest blogger Chris Richardson is the founder of the original CloudFoundry.com, an early Java PaaS (Platform as a Service) for Amazon EC2. He now consults with organizations to improve how they develop and deploy applications. He also blogs regularly about microservices at http://microservices.io.

How to hack any Linux machine just using backspace: flaw in GRUB

A rather embarrassing bug has been discovered which allows anyone to break into a Linux machine with ease.
If you press the backspace key 28 times on a locked-down Linux machine you want to access, a Grub2 bootloader flaw will allow you to break through password protection and wreak havoc in the system.
Researchers Hector Marco and Ismael Ripoll from the Cybersecurity Group at Universitat Politècnica de València recently discovered the vulnerability within GRUB, the bootloader used by most Linux distros.
As reported by PC World, the bootloader is used to initialize a Linux system at start and uses a password management system to protect boot entries -- which not only prevents tampering but also can be used to disable peripheries such as CD-ROMs and USB ports.
Without GRUB password protection, an attacker could also boot a system from a live USB key, switching the operating system in order to access files stored on the machine's hard drives.
The researchers discovered the flaw within GRUB2, of which versions 1.98 to 2.02 are affected. These versions were released between 2009 and today, which makes the vulnerability a long-standing and serious problem.
In a security advisory, Marco and Ripoll said the bootloader is used by most Linux distributions, resulting in an "incalculable number of affected devices."
Exploiting the flaw -- and checking if you are vulnerable -- is simple. When the bootloader asks for a username, simply press the backspace button 28 times. If vulnerable, the machine will reboot or you will encounter a Grub rescue shell.
The shell grants a user a full set of admin privileges -- within the rescue function only -- to load customised kernels and operating systems, install rootkits, download the full disc or destroy all data on a machine.
The researchers say the fault lies within two functions, grub_password_get() and grub_username_get(), which suffer from integer underflow problems. Exploiting the flaw causes out-of-bounds memory overwrites. When a user presses backspace, the bootloader erases characters which do not exist -- damaging its memory enough to trigger an exception in the authentication code.
Not only does the vulnerability give attackers the chance to steal data and tamper with peripherals and passwords, but Linux entries can be modified to deploy malware.
While there is an emergency patch available on Github for Linux users, the main vendors have been made aware of this security flaw. It is recommended that users update their machines as soon as patches have been deployed, but it is worth noting an attacker needs physical access to the machine to exploit the flaw.
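As a rough way to see which GRUB release a machine is running while you wait for your distribution's patch, the installed GRUB tools report their version; a minimal sketch (on Red Hat-based systems the command is typically grub2-install instead):
# grub-install --version
Versions reported in the 1.98 to 2.02 range fall within the affected window described above.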

Monday, December 21, 2015

Can IBM Sell Mainframe As Cloud Rock Star?



On the mainframe's 50th birthday, IBM positions it as the still young Enterprise Cloud System, capable of running 6,000 Linux workloads.

An IBM mainframe can be subdivided to run Linux virtual machines, in something of the same manner that Amazon Web Services or the Rackspace cloud runs virtual machines. But unlike an AWS host, a single mainframe runs 6,000 VMs at a time.
Likewise, the old mainframe Customer Information Control System, better known as CICS, can process 1.1 million transactions per second. It was the predecessor to client/server architectures and connected thousands of users to a single mainframe. That number, by the way, is more transactions than are conducted by the Google search engine per second, according to a spokesman for IBM partner GT Software in Atlanta.
On the mainframe's 50th anniversary this week, IBM buffed up and displayed it as one of the few systems that can scale to cloud-like proportions. It can still keep up with the x86 server clusters that power many of the Web-scale companies that have emerged in the last 10 years. IBM claimed the mainframe is a suitable platform on which to run the enterprise private cloud, and says public cloud service providers can adopt it for Linux workloads as well. The mainframe, as IBM would have it, is looking young again.
IBM marked the mainframe's 50th birthday this week.
Cloud architectures often look like an attempt to solve with commodity parts the same problems that were answered by the first mainframe's design -- general-purpose operations combined with a large degree of scalability. The mainframe brought concentrated CPUs and memory together, then added access to communications and storage via nearby, high-speed channels. It was an early model of intense computation. And it's enabled IBM, for many of the past 50 years, to keep mainframe system prices high. In the cloud, in contrast, low-cost personal computer processors are packaged with nearby disks and switches on a server rack to produce some of the same attributes. Google, Amazon Web Services, and eBay have all been built on those commodity parts.
Of late, IBM has been lowering mainframe prices so that it can continue to play a role in the cloud. On Tuesday, it took steps to make the mainframe a more attractive host for cloud service suppliers. It announced the first cloud-oriented, System z-based offering, the IBM Enterprise Cloud System. The Cloud System is based on a zBC12 or zEC12 mainframe. As it was announced last July, the zBC12 retailed for about $75,000. In comparison, 2003's mainframe model, the z990 T-Rex, retailed for $1 million.
In its announcement, IBM said the mainframe is now a cheaper environment than x86 servers on which to run Linux virtual machines. Unlike the mainframes that run IBM's proprietary operating systems, the Enterprise Cloud System is geared to run Linux and Linux virtual machines. "Thanks to higher system efficiency and greater scalability, the total cost of some Linux on System z cloud deployments can be up to 55% less than comparable x86-based cloud infrastructure," IBM claims.
The cloud mainframes are equipped with special-purpose processors dedicated to running Linux virtual machines, called the Integrated Facility for Linux (IFL). Each IFL can host 60 virtual machines, and a zEC12 is capable of mounting 100 IFLs. Hence, IBM comes up with a figure of 6,000 VMs for a single mainframe host.
An Enterprise Cloud System, in addition to a mainframe Linux server, includes IBM v7000 or DS8000 storage; IBM Wave z/VM (a graphical management interface for managing VMs); and IBM Cloud Management Suite. The last includes SmartCloud Orchestrator for configuring and deploying virtual machines; Omegamon XE for monitoring and managing performance of workloads; and Tivoli Storage Manager. The suite is the IBM software that provides the automated Linux VM spinup, deployment, and management.
Also on Tuesday, IBM introduced new consumption-based pricing models for managed service providers. If a service provider builds infrastructure based on mainframes, it can pay off the mainframe bill based on its use by customers, instead of an upfront payment.
IBM's example of a company that is doing so is Business Connexion, the largest enterprise service provider in Africa. The firm is packaging mainframes into "pop-up" datacenters that can be installed in a telco's remote office to provide Internet services to a previously unreached area. A mainframe in this setting uses about the same amount of electricity as a clothes dryer, IBM spokesmen said.

Thursday, December 10, 2015

8 Things You Should Know About Storage Deployments With OpenStack



  1. OpenStack has sub-projects that deliver both block (Cinder) and object (Swift) storage. A variety of performance-focused primary storage and optimized secondary storage solutions are on the market, and they provide flexible, highly scalable storage services for OpenStack.
  2. Optimized secondary storage has clear value for object-based, large-scale storage, where spinning disk still maintains a $/GB advantage over flash and performance is not a significant concern.
  3. Cinder has a plug-in architecture: you can use your own vendor’s backend(s) or the default LVM driver. Cinder aims to virtualize various block storage devices and abstract them into an easy, self-serve offering that lets end users allocate and deploy storage resources on their own quickly and efficiently (see the example after this list).
  4. Swift’s ability to provide scale-out storage on commodity hardware may make it a more attractive alternative to external storage, such as a SAN, for use cases where $/GB for flash can’t beat that for spinning disk AND performance isn’t a huge concern.
  5. The role of the Cinder Project is evolving very quickly and in many ways is quickly maturing through community contributions.
  6. Any discussion about distributed storage solutions for cloud should include commercial options alongside open source ones. In the case of cloud storage for performance-sensitive applications, the options provided by open source as well as legacy storage vendors are significantly lacking.
  7. APIs are an often overlooked component of block storage in cloud environments. A robust API that lends itself to automating all aspects of the storage system is imperative to achieve the promised operational benefits of cloud. But having APIs alone isn’t enough. Are they robust and complete? Can your chosen storage solution withstand all the API calls you’re going to make?
  8. Do-it-yourself storage solutions can save a lot of money, but don’t forget the hidden costs of time. If no services are available to help you install and configure the system, deployment can be slow and complicated. If something goes wrong, who provides support?
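As a small illustration of the self-service model mentioned in item 3, here is a hedged sketch using the standard OpenStack command-line client; the volume and server names are invented for the example:
# openstack volume create --size 10 demo-volume
# openstack server add volume demo-instance demo-volume
Cinder then routes the request to whichever backend driver the operator has configured, whether that is a vendor array or the default LVM driver.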
This is just the beginning. Read the article for a fuller picture of what’s on your decision horizon. If we missed something, let us know in the comments — and ask questions. Our (world’s best) OpenStack storage deployment experts are standing by to answer them.
Download OpenStack Deployments Done Right: Block Storage vs. Object Storage to help you choose the right storage for your OpenStack cloud.

Wednesday, December 9, 2015

Open Access Online Journal: CloudBook.net

http://www.cloudbook.net/journal/cloudbook-journal-3-2-2012


Cloudbook Journal Volume 3 Issue 2


Highly Available in the Cloud
In this issue of the Cloudbook Journal, we share stories such as:
* Delivering highly available shadow sites in the cloud
* Cloud analytics on AWS
* In-Person event planning in the cloud
* Unlocking the commercial potential in your cloud software
* Velocity marketing
* How to do cloud marketing in a recession
Feedback welcomed at magazine@cloudbook.net         



Newvem Cloud Analytics Engine
by Whitney Mountain
How Newvem, through its analytics engine for Amazon Web Services users, coupled with its KnowYourCloud forum, is helping users make the most of the new IT world we live in.
How to Do Cloud Marketing in a Recession
by Tanmay Deshpande
In today’s situation, where the world is already suffering from a big economic crisis, most companies want to reduce their expenses, and many are cutting back on IT and IT-related tasks. At the same time, those tasks still need to get done. This story takes a look at how to convince clients to opt for cloud computing in the face of difficult economic times.
In-Person Event Planning in the Cloud
by Whitney Mountain
This story discusses how Event Day's technology is trying to make the in-person experience at an event more enjoyable by leveraging Azure's cloud.
Hedge Funds Deploy in the Cloud
by Whitney Mountain
For many, hedge fund investing evokes images of ultra-sophisticated investors using cutting edge technology to execute sophisticated strategies in pursuit of superior returns. But for the world’s wealthiest people and institutions, the process of investing in those hedge funds is still handled in a very antiquated way. This story discusses how Hedge Fund Management is making it to the cloud.
Delivering Highly Available Shadow Sites in the Cloud
by Whitney Mountain
When it comes to cloud computing, security issues can scare many companies so much that they become afraid of their own shadows. But a company called Foresight is taking the shadows of their clients’ websites and making friends out of foes ...
Unlocking the Commercial Potential in your Cloud Software
by Aidan Gallagher
For established software vendors, the cloud presents perhaps the greatest challenge they have yet faced. How do they reinvent their business for the new age? After all, they have existing technology in the market, with paying customers, ongoing projects, legal obligations, and support & maintenance contracts. On the face of it, a shift to the cloud will cannibalize their business, erode their market share, and destroy their pipeline. And if the customer isn’t asking for it, why would you upset the applecart? This story discusses why ...
Velocity Marketing
by Ken Rutsky
Hidden behind the content marketing strategies, social media mysteries, and marketing automation dashboards and metrics lies a fundamental problem. In today’s hyper-competitive, global, instantaneous market, where buyers and consumers have nearly unlimited access to information and each other, the fight for attention and share has become a treadmill of constantly increasing speed. With the proliferation of competitive solutions in even the most specialized market segment and today’s extremely well-educated and self-directed buyers, we need a new “Velocity Marketing” formula: one that delivers breakthrough by combining impact, engagement, and experience.
Virtual Patching Secures Electronic Medical Records in a Private Cloud
by Whitney Mountain
In today’s social media age, personal information has gone public. While people can limit what they share online, many have taken to sharing details as major as having a baby and as minor as stubbing a toe. But even for the most rabid sharers, there is still information that is too private to distribute publicly across virtual networks. Medical records, for instance, are one form of personal information that is best kept private. This story discusses how BIDPO keeps its clients’ medical records private.

All you have ever wanted to know about Linux on the mainframe

Novell's Meike Chabowski says Linux enriches the mainframe's ecosystem and positions it to satisfy future requirements.
 
Commentary - Even after ten years of successful mainframe deployments, people and organizations not familiar with mainframe Linux often still see the operating system as suitable only for commodity hardware. While Linux indeed runs beautifully on x86 hardware, it is hardly limited to that platform. If you have ever wondered whether Linux can handle complex and scalable mainframe workloads, read on for the exciting truth about Linux/mainframe synergies.
Now that Linux on System z (the definitive term for mainframe Linux) is a reality, its impact – on server consolidation, data center economics, IT reliability and cloud computing – is large and increasing. Today, Linux on System z and the other z operating systems (z/OS, z/VM, z/VSE, z/TPF) are best friends and partners in the mainframe ecosystem.
How We got here: Linux on System z is not new
For those lacking scorecards, May 17, 2010 was the tenth anniversary of commercial/supported mainframe Linux. Its birth followed a late-1990s major shift in IBM thinking leading to acceptance of an open source operating system running on the company’s crown technology jewel, System/390. Fortunately, a skunkworks proof-of-concept project at IBM’s Boeblingen Lab demonstrated feasibility and viability of the unlikely marriage. IBM decided not to release its own mainframe Linux distribution but invited existing suppliers to join the party.
SuSE (later SUSE LINUX AG, acquired by Novell in 2004) quickly offered its prototype implementation, in exchange for access to detailed architectural information supporting ongoing development. Using a system created at Marist College from IBM patches running on a borrowed Multiprise 3000 mini-mainframe, engineers used the SuSE AutoBuild tool to create more than 400 software packages in the first weekend. They adapted YaST (the installation, configuration and systems management tool integrated with SUSE Linux Enterprise) and edited package selection for mainframe distribution.
But building, installing and running Linux on the mainframe did not ensure its acceptance. Starting with System/360 in the mid-1960s, IBM had positioned its flagship computing platform for stability, reliability and backwards compatibility. To match that culture, the normally dynamic Linux code base was frozen, maintained to ensure ongoing hardware and software compatibility and positioned with subscription-based services. Customers bought into – and purchased – the concept and, with that, the first enterprise-ready and fully supported Linux operating system for the mainframe was born. It has closely tracked mainframe hardware and software progress from S/390 through various generations (z900, z990, z9, z10) to today’s premier system zEnterprise. Along the way, IBM introduced the Integrated Facility for Linux (IFL), a special-purpose System z central processor exclusively for Linux workloads. IFL runs at full System z speed, does not incur IBM software charges for traditional System z operating systems and middleware and upgrades at no cost to new technology generations.
Over the years, as underlying system architecture evolved, SUSE Linux Enterprise Server exploited complementary features, such as the File Hierarchy Standard to accommodate 64-bit adaptations, larger address spaces and coexistence of 32/64-bit applications within Linux instances. It has also been the first distro to support new machine instructions as mainframe microarchitecture advanced. Recognizing mainframe sites’ need for ready-to-run software installations, the Starter System for SUSE Linux Enterprise Server for System z – a complete pre-built installation server to streamline and simplify provisioning virtual Linux servers – has also been made available.
Linux is mature and ready for the big time
Linux is reliable, secure and efficient. In fact, its unofficial middle name could be “mission critical” considering its combined heritage of enterprise-proven, industrial-strength mainframe security and thousands of widely used industry-standard Linux applications. A valuable third component is a robust worldwide Linux on System z community providing mutual support and skilled staffers of all levels.
This should come as no surprise. Mainframe Linux represents natural evolution of the system’s long and productive use. The beauty of that growth is that “Linux is Linux.” Staffers familiar with Linux on smaller platforms will comfortably use and support it on big iron and applications/services hosted elsewhere can generally be consolidated and scaled up to exploit mainframe reliability, availability and serviceability.
Installation and management tools galore
System z virtualization – extending four decades of development and productive use – allows ultra-flexible workload and resource partitioning by hardware (LPARs, logical partitions) and software (z/VM). These technologies allow extreme sharing of system resources, provisioning resources where and when they’re needed, over-committing real resources, handling demand spikes and virtualizing resources not present in real hardware.
Whether or not the oft-cited mainframe skills shortage exists, IBM and the Linux community have created a library of cookbooks and tutorials providing resources, plans, checklists, procedures and tools for quick and reliable mainframe Linux implementation. Once installed, Linux servers and applications can be monitored, measured, tracked and tuned to meet specific organization and application performance requirements. Native mainframe management tools and Linux-specific software agents combine to present both system-wide dashboards and fine-grain application/server details.
Why mainframe vs. x86?
Underutilized resources are uneconomical and do not pay for themselves. x86 platform CPU use rates of 10 to 15 percent are commonly tolerated as unavoidable. Additionally, virtualization factors of 10x are considered an achievement in hardware economies. Yet generations of mainframes have consistently run productively at near-100 percent CPU usage while virtualizing servers by factors in the hundreds. System z hosting configurations dramatically reduce physical server footprint, power/cooling configurations and expenses, software costs for fewer cores/processors and staffing requirements.
IBM’s newest mainframe, the IBM zEnterprise System, more closely integrates differing system architectures than has previously been possible. This allows workloads on the new IBM zEnterprise 196 mainframe server as well as workloads on select IBM POWER7 and System x blades to share resources and be managed as a single, virtualized system. The IBM zEnterprise Blade Center Extension (zBX) and the IBM Unified Resource Manger creates multiple server images and allow efficient resource allocation and utilization. SUSE Linux Enterprise from Novell runs the first workload optimized offering for zBX, the Smart Analytics Optimizer, which speeds complex analytic workloads at a lower cost per transaction.
“The new IBM zEnterprise System represents a bold move to fundamentally change how data centers are managed,” said Carrol Stafford, vice president, IBM System z. “The new mainframe is not only the fastest enterprise server in the world, it also represents a giant leap forward in how to integrate and automate workloads across heterogeneous platforms with unrivaled reliability and security. SUSE Linux Enterprise Server plays a key role across this architecture in helping clients take advantage of this performance as they aim to increase data center efficiency and consolidate workloads. In addition to exploiting Linux on System z, customers can manage and integrate workloads on selected IBM Power and System x servers through zBX powered by zManager.”
Mainframes are affordable
Smart management has moved from simple price tag comparisons (TCA, total cost of acquisition) to strategic TCO (total cost of ownership) evaluations. Long-term mainframe system value means that for many workloads, a System z provides better return on investment. While Linux gained its reputation running cheaply on commodity hardware – and, in fact, it is a great choice on any hardware – running Linux on mainframe hardware economizes on energy, floor space, software and staffing.
To simplify acquiring and configuring mainframe Linux, Novell and IBM partnered to provide the Solution Edition for Enterprise Linux for System z, which is complemented by the SUSE Linux Enterprise Consolidation Suite for System z. Subscription pricing includes:
• Reduced total cost of acquisition (TCA)
• Ability to consolidate .NET workloads on IBM mainframe and .NET development plug-ins into Microsoft’s Visual Studio to develop applications on x86 and test/deploy them on System z
• Linux training, including vouchers for extensive on-demand Data Center Training Library
The truth is out there
Businesses, government agencies and non-profit organizations must consider their unique workloads and applications to determine whether System z and Linux suit their IT needs. Good application candidates for Linux hosting include existing apps that are well instrumented, apps using mainframe-resident data (e.g., z/OS or z/VSE), apps coordinating processes running elsewhere and apps offloading time-consuming processing to IFL cycles.
Unfortunately, “myth-information” about Linux abounds; but it’s easy to rebut uninformed, often self-serving and, of course, contradictory objections to evaluation such as…
• Linux is too new to be trusted
• Linux is obsolete, replaced by newer and better technologies
• Linux requires unique/scarce/arcane/expensive skills
• Linux is unreliable
• Linux is unsupported
• Linux and z/OS are enemies
• Linux information is scarce and too hard to find
• Linux is hard to install
• Open source software – including Linux – is evil and un-American
• Open source software is dangerous because of its chaotic/informal development environment
In any field of endeavor – woodworking, cooking, gardening, or IT – success requires the right tools. IBM’s System z is uniquely suited for hosting today’s demanding hybrid multi-system applications, services and cloud computing. It is quite clear that increasingly, Linux enriches the mainframe’s ecosystem and positions it to satisfy future requirements.
Linux continues to give the mainframe the new blood it needs to stay alive in the 21st century.
 
SEATTLE -- IBM and The Linux Foundation, the nonprofit organization dedicated to speeding up the growth of Linux and collaborative software, announced the Open Mainframe Project (OMP) at LinuxCon. IBM is betting big on enterprise Linux by unveiling what it calls the most secure Linux servers in the industry.
The mainframe is alive and well with Linux running through its circuits.
The founding Platinum members of OMP include ADP, CA Technologies, IBM and SUSE. This news comes as no surprise. IBM has powered its zSeries mainframe with Linux since the year 2000. Indeed, Linux is what has enabled the mainframe to continue to be a living force in computing long after its critics had written its obituary.
That's because together big iron and Linux excel at delivering the services enterprises need today. These include: Big Data, mobile processing, cloud computing and virtualization. To make sure Linux and the mainframe continue to thrive, vendors, users and academia needed a neutral forum to work together to advance Linux tools and technologies and increase enterprise innovation. That forum is the OMP.
The OMP members will focus on leveraging new Linux software and tools that can take advantage of the mainframe's speed, security, scalability and availability. The Project will seek to significantly broaden the set of tools and resources that are intended to drive development and collaboration of mainframe Linux. The OMP will also aim to coordinate mainframe improvements to upstream projects to increase the quality of these code submissions and ease upstream collaboration.
Specifically, IBM will enable programs such as Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL and Chef on z Systems to provide clients with open-source choice and flexibility for hybrid cloud deployments. IBM will also be contributing a great deal of formerly proprietary mainframe code to the open-source community.

In addition, SUSE, which is the top mainframe Linux distributor, will now support the KVM hypervisor. Michael Miller, SUSE vice president of global alliances and marketing, said "SUSE has been the No. 1 Linux on the mainframe for 15 years by working together with this ecosystem. The Open Mainframe Project provides an ideal environment to expand that collaboration in a way that increases choice and brings benefits to customers and developers alike."
"Fifteen years ago IBM surprised the industry by putting Linux on the mainframe, and today more than a third of IBM mainframe clients are running Linux," said Tom Rosamilia, IBM's senior of IBM Systems in a statement. "We are deepening our commitment to the open-source community by combining the best of the open world with the most advanced system in the world in order to help clients embrace new mobile and hybrid cloud workloads. Building on the success of Linux on the mainframe, we continue to push the limits beyond the capabilities of commodity servers that are not designed for security and performance at extreme scale."
Jim Zemlin, The Linux Foundation's executive director, added, "Linux today is the fastest growing operating system in the world. As mobile and cloud computing become globally pervasive, new levels of speed and efficiency are required in the enterprise and Linux on the mainframe is poised to deliver. The Open Mainframe Project will bring the best technology leaders together to work on Linux and advanced technologies from across the IT industry and academia to advance the most complex enterprise operations of our time."