Friday, 24 March 2017

Best Hadoop training institutes in Hyderabad

Hadoop course structure and ecosystem by RStrainings
We now live in a big data world. The amount of unstructured data being added to warehouses from different sources has grown exponentially, so the challenge is how to extract business value and customer insights from this huge volume of raw data. One of the most popular technologies aimed at big data analytics is Hadoop: an open-source framework, written in Java and run on Linux, designed to solve big data problems on both the storage side and the processing side. (RStrainings, a leading Hadoop training centre in Hyderabad, offers this Hadoop training in the classroom and online.) Hadoop is built on a few important ideas and is rich in features. First, it uses commodity machines to store its raw data. Second, data locality: instead of moving large datasets over the network from one machine to another, the code is moved to the machine where the data resides, which makes processing very large datasets faster and more efficient. Third, fault tolerance: multiple copies of the data are kept within the cluster, so data stays highly available and the system can handle machine failures.
The Hadoop framework uses two core components to store and process big data (very large) datasets. One is HDFS, the Hadoop Distributed File System, and the other is MapReduce.
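To make the MapReduce model concrete, here is a minimal single-machine sketch in Python (not the real Hadoop Java API) that mimics the map, shuffle and reduce phases with the classic word-count example:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big tools", "hadoop stores big data"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

In real Hadoop the map and reduce functions run in parallel on many worker nodes, and the shuffle moves intermediate pairs across the network; the logic, however, is the same.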
Much like a Linux file system, HDFS splits the data into chunks (each chunk is called a block in Hadoop) and distributes them across multiple servers within the Hadoop cluster. MapReduce is a programming model that helps process the large datasets stored in HDFS. RStrainings delivers every session practically, with understandable examples.
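As a rough illustration of the splitting, assuming the common default HDFS block size of 128 MB, the number of blocks for a file can be computed like this:

```python
import math

BLOCK_SIZE_MB = 128  # default HDFS block size in recent Hadoop versions

def num_blocks(file_size_mb):
    # A file is split into ceil(size / block_size) blocks;
    # the last block may be smaller than the full block size.
    return math.ceil(file_size_mb / BLOCK_SIZE_MB)

print(num_blocks(500))  # a 500 MB file -> 4 blocks (128 + 128 + 128 + 116)
```

Each of those blocks is then stored (and replicated) on different DataNodes in the cluster.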
A Hadoop cluster consists of two types of nodes (individual machines): master nodes and worker nodes. Master nodes always manage something within the cluster, and a cluster can have more than one master node. The NameNode is the master node that manages the entire metadata of the cluster, which makes it the centrepiece, the heart, of a Hadoop cluster. A worker node (also called a DataNode) stores the actual file contents in the form of blocks. Whenever a client wants to read from or write to HDFS, it first contacts the NameNode, so if the NameNode crashes the entire Hadoop cluster becomes inaccessible. RStrainings provides training worldwide, including the USA, India, Singapore, Malaysia, the UK and Canada.
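A hypothetical sketch of the NameNode's role: it holds only metadata (which blocks make up a file, and which DataNodes hold each block), while the blocks themselves live on the DataNodes. The file path and host names below are made up for illustration:

```python
# NameNode metadata: file path -> ordered list of block IDs.
file_to_blocks = {"/logs/web.log": ["blk_1", "blk_2"]}

# Block ID -> DataNodes holding a replica (replication factor 3 by default).
block_locations = {
    "blk_1": ["datanode1", "datanode2", "datanode3"],
    "blk_2": ["datanode2", "datanode3", "datanode4"],
}

def locate(path):
    # A client read starts with this metadata lookup on the NameNode,
    # then streams each block directly from one of its DataNodes.
    return [(blk, block_locations[blk]) for blk in file_to_blocks[path]]

for blk, nodes in locate("/logs/web.log"):
    print(blk, "->", nodes)
```

This also shows why the NameNode is a single point of failure: without this lookup table, the blocks on the DataNodes cannot be reassembled into files.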
Different Hadoop ecosystem components:-
PIG:- Pig, developed at Yahoo in 2006, is a platform offering a high-level data-flow language that provides an alternative abstraction on top of MapReduce. It uses its own scripting language called Pig Latin, and the Pig framework translates Pig Latin scripts into a series of MapReduce jobs. (RStrainings is located in Madhapur, Hyderabad, pin code 500081.)
Hive:- Hive is data-warehouse infrastructure software. It is both a storage component, keeping the underlying data in HDFS, and a processing component that can analyse the big data stored there. Hive provides a SQL-like query language called HiveQL (HQL, the Hive Query Language).
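HiveQL is close to standard SQL. As an illustration of the query style only, here is the same kind of aggregation run with Python's built-in sqlite3 module (not Hive itself; the table and data are made up):

```python
import sqlite3

# A toy in-memory SQL table standing in for a Hive table over HDFS data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (city TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("Hyderabad", 100), ("Hyderabad", 250), ("Chennai", 80)])

# A GROUP BY aggregation; in Hive, a query like this would be
# compiled into one or more MapReduce jobs behind the scenes.
rows = conn.execute(
    "SELECT city, SUM(amount) FROM sales GROUP BY city ORDER BY city"
).fetchall()
print(rows)  # [('Chennai', 80), ('Hyderabad', 350)]
```

The point is that analysts can keep writing familiar SQL while Hive handles the distributed execution.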
HBASE:- An RDBMS does not scale well, and it is hard to shard its data. HBase is a distributed, column-oriented Hadoop ecosystem database built on top of HDFS. HBase does not use MapReduce to process big data; unlike a MapReduce job, HBase can access data randomly with low latency, which is what online workloads need.
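A toy sketch of HBase's data model (plain Python dictionaries, not the real HBase API; the row keys and column names are invented): rows are addressed by a row key, and each cell is named by a column family and qualifier, so a single row can be fetched directly without scanning the dataset:

```python
# Row key -> {"family:qualifier": value}; a toy stand-in for an HBase table.
table = {
    "user1": {"info:name": "Asha", "info:city": "Hyderabad"},
    "user2": {"info:name": "Ravi", "info:city": "Chennai"},
}

def get(row_key, column):
    # Random access by row key: no full scan of the dataset, which is
    # what makes HBase suitable for low-latency online reads.
    return table.get(row_key, {}).get(column)

print(get("user1", "info:city"))  # Hyderabad
```

Contrast this with MapReduce, which always processes the whole input in batch.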

Sunday, 8 January 2017

SAP FICO overview

SAP stands for Systems, Applications, and Products in data processing.
The goal of SAP is to give customers the ability to interact with a common corporate database; the SAP R/3 software manages all the modules.
SAP FICO
FICO stands for FI (Finance) and CO (Controlling), and it helps in running a business. The financial sections included in SAP FI are:
--> General ledger
--> Accounts payable
--> Accounts receivable
The benefit is that when products are sold and payments are made, the entries are automatically posted to the system in real time.
CO (Controlling) handles the flow of costs and the transactions that take place in the company.
It involves various segments, such as:
--> Product costing
--> Cost centre accounting
--> Profitability analysis
There is a large demand for SAP in the market nowadays.
Demand for these positions, and for the software itself, is growing year on year, so there are many job opportunities and surely a great future with SAP.
The average pay scale in India for a SAP FICO professional is about INR 590,383 per annum.
In the USA the starting salary is 50,000-70,000 USD per annum, and a SAP FICO project manager earns 140,000 to 180,000 USD per annum.
A typical SAP FICO course covers:
==> Financial accounting basic settings
==> General ledger
==> Accounts payable
==> Accounts receivable
==> Asset accounting
==> Reports
==> Basic settings of controlling
==> Cost centre accounting
==> Internal orders
==> Profit centre accounting
==> Profitability analysis
This is an overview of SAP FICO. The growth of, and demand for, this skill in industry is strong, and it will be helpful for experienced professionals as well as freshers.