Working under the direction of the Manager or Senior Team Lead,
the candidate will be responsible for designing, developing, and testing
sophisticated software; researching, designing, developing and
testing software for data access, Big Data, and cloud computing
frameworks; building software tools to help engineers and
scientists import and export data, including Big Data, to and from the
MATLAB technical computing environment; participating in all phases
of the design, development, and testing of I/O capabilities
including scaling up to Big Data and cloud computing environments;
working in an Agile environment and contributing to the entire
development process including planning and prioritization,
gathering user requirements, writing functional specifications,
design reviews, implementation, and testing; and working closely
with other developers, quality engineers, usability specialists,
and writers, as well as collaborating with downstream teams to help
leverage features and infrastructure.
Education and Experience:
Master's degree or higher (or foreign education equivalent) in
Engineering or Computer Science and no experience.
Bachelor's degree (or foreign education equivalent) in
Engineering or Computer Science and five (5) years of experience in
the job offered or five (5) years of experience developing code for
data import and export.
Demonstrated expertise in object-oriented design and analysis
using C++ and MATLAB; developing code using C/C++ third party
libraries, MEX files, and multi-threaded programming; and in
architecture and design (requirements gathering through
functional design and testing) of software for data import/export
functionality in MATLAB within a multi-platform environment
(Windows, Mac, and Linux).
Demonstrated expertise designing algorithms in C++ and MATLAB to
read Big Data in partitions of size that can fit into memory for
processing in MATLAB; writing algorithms for partitioning data
using Big Data file formats; and developing high-performance
algorithms to address memory requirements and scale to
computational cluster installations for Big Data.
Demonstrated expertise in system analysis or technical support,
including engaging with end users to gather business and technical
requirements, and conducting user acceptance testing and feedback.
Demonstrated expertise designing cloud infrastructure for data
collection and analysis within an Amazon Web Services environment
using Amazon EC2 and S3; and performing Big Data analytics using
parallel computing techniques within an Apache/Hadoop system.
[Expertise may be gained during a graduate program.]
For the position listed above, interested candidates may search
by job code 23853 for specific job details and requirements, and
apply online on the Careers Page at