Senior Hadoop Architect - Contract

Location: Atlanta, GA
Date Posted: 08-01-2013
Position:  Hadoop System Architect

Duration:  Long Term Contract
Location: Anywhere, USA (75-100% travel required)
Description: We are seeking experienced Hadoop and Big Data systems architects to join our team. This key role has two major responsibilities: first, to work directly with our customers and partners to optimize their plans and objectives for designing and deploying Apache Hadoop environments; and second, to assist in designing and building reference configurations that enable our customers and influence our product. Systems Architects will facilitate the flow of communication between our project teams and the customer. For these strategically important roles, we are seeking outstanding talent.
Responsibilities:
         Participate in the pre- and post-sales process, helping both the sales and product teams interpret customers’ requirements
         Work directly with prospective customers’ technical resources to devise and recommend solutions based on their requirements
         Analyze complex distributed production deployments, and make recommendations to optimize performance
         Work closely with project teams at all levels to ensure rapid response to customer questions and project blockers
         Help develop reference Hadoop architectures and configurations
         Drive POCs with customers to successful completion
         Write technical documentation and knowledge-base articles
         Attend speaking engagements when needed
         Travel up to 100%
Requirements:
         More than three years of customer-facing Professional Services experience architecting large-scale storage, data center, and/or globally distributed solutions
         2+ years designing and deploying three-tier architectures or large-scale Hadoop solutions
         Ability to understand and translate customer requirements into technical requirements
         Strong experience implementing software and/or solutions in enterprise Linux or Unix environments
         Ability to compile and install Linux applications from source, including the Linux kernel and kernel modules
         Experience integrating solutions such as LDAP or system/installation management tools into the overall solution
         Strong understanding of network configuration, devices, protocols, speeds and optimizations
         Proficiency in Python, Perl, or another scripting language (required)
         Familiarity with the Java ecosystem and enterprise offerings, including debugging and profiling tools (e.g., JConsole), logging and monitoring tools (log4j, JMX), and security offerings (Kerberos/SPNEGO)
         Significant prior experience writing to network-based APIs, preferably REST/JSON or XML/SOAP
         Solid background in database administration or design; Oracle RAC experience a plus
         Excellent verbal and written communication skills
         Experience in architecting data center solutions – properly selecting server and storage hardware based on performance, availability and ROI requirements
         Demonstrable experience using R and the algorithms provided by Mahout
Nice to have, but not required:
         Experience working with Apache Hadoop, including knowledge of how to create and debug Hadoop jobs
         Ability to understand big data use cases and recommend standard design patterns commonly used in Hadoop-based deployments
         Knowledge of the data management ecosystem, including concepts of data warehousing, ETL, data integration, etc.