Leverage multiple data sources to rapidly improve analytics and decisions.
A data lake architecture lets you draw on richer, deeper volumes of data to improve and deploy predictive analytics. The FICO Platform supports data lake architectures, making it easy to ingest and analyze data from a variety of sources, and it operationalizes predictive models that use data in real time to automate and improve decisions.
The FICO® Data Lake and Analytics Technology Blueprint provides an end-to-end solution for self-service data access in a data lake architecture. The solution includes an extensive array of tools for data ingestion, preparation, discovery, analytics and governance, along with a foundational layer for authentication, authorization, distributed execution services, storage access, system monitoring and high availability.
The data ingestion service provides connectivity to various operational systems and ingests their data. It is designed to transfer data between Hadoop and relational databases or mainframes. You can use it to import data from a relational database management system (RDBMS) such as MySQL or Oracle, or from a mainframe, into the Hadoop Distributed File System (HDFS), and then transform the data with Hadoop MapReduce. The process is automated, relying on the database to describe the schema of the data to be imported. It uses MapReduce to import and export the data, which provides parallel operation as well as fault tolerance.
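The ingestion service itself is configured through the platform rather than hand-coded, but the pattern it automates can be sketched directly. The PySpark example below is illustrative only: it performs the same kind of parallel, partitioned import from an RDBMS into HDFS, with the connection details, table, and column names all assumed for the sketch.

```python
# Illustrative sketch, not the blueprint's ingestion service. It shows a
# parallel JDBC import from an RDBMS into HDFS; every connection detail,
# table name, and column name here is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdbms-ingest").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db.example.com:3306/sales")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "...")  # use a credential store in practice
    # Split the import into parallel tasks, analogous to MapReduce mappers;
    # each partition is fetched and retried independently.
    .option("partitionColumn", "order_id")
    .option("lowerBound", "1")
    .option("upperBound", "10000000")
    .option("numPartitions", "8")
    .load()
)

# Land the raw data in the lake as columnar files on HDFS.
orders.write.mode("overwrite").parquet("hdfs:///data/lake/raw/orders")
```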
The data wrangling user experience leverages the latest techniques in data visualization, machine learning and human-computer interaction to guide users through exploring and preparing data. Interactive exploration automatically profiles the data and presents visualizations suited to its content. Predictive transformation turns every click or selection into a prediction: the system assesses the data at hand and recommends a ranked list of suggested transformations for users to evaluate and edit. Intelligence and context are learned both from data registered into the platform and from how users interact with it. Common tasks are automated, and users are prompted with suggestions that speed their wrangling. The data wrangling solution can transform the data on the fly in the application or compile the recipe down to Spark, Google DataFlow or any in-memory engine. The platform natively supports all major on-premises and cloud Hadoop platforms, so the same recipe scales from interactive samples to full production volumes.
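To make the compile-to-Spark step concrete, here is a minimal sketch of what a simple wrangling recipe (standardize a date, fill missing values, derive a flag) might look like once expressed as PySpark. The column names and steps are hypothetical, not taken from the product.

```python
# Hypothetical wrangling recipe expressed as the PySpark it might compile to.
# The columns (txn_date, amount) and the derived flag are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("wrangle-recipe").getOrCreate()
raw = spark.read.parquet("hdfs:///data/lake/raw/orders")

clean = (
    raw
    # Step 1: standardize a string column to a proper date type.
    .withColumn("txn_date", F.to_date("txn_date", "MM/dd/yyyy"))
    # Step 2: fill missing numeric values with zero.
    .fillna({"amount": 0.0})
    # Step 3: derive a flag suggested during interactive exploration.
    .withColumn("high_value", (F.col("amount") > 1000).cast("boolean"))
)

clean.write.mode("overwrite").parquet("hdfs:///data/lake/curated/orders")
```

Because the recipe is declarative, the same three steps could run in-application against a sample or be submitted as the batch job above against the full dataset.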
Managing compliance mandates across a broad spectrum of data and data sources is an increasingly difficult and costly challenge for IT. New regulations, such as the General Data Protection Regulation (GDPR) in Europe, make it harder to maintain data locality and protect customer confidentiality and privacy. By leveraging data lake capabilities to create a converged regulatory and risk data hub, businesses can maintain and document compliance more easily and more reliably.
The solution provides a web search interface designed for business users to search a catalog of trusted, curated data, so they can quickly find the data they need and share it with colleagues. It simplifies data governance by delivering a scalable, automated and repeatable process for identifying sensitive data, capturing data lineage, and ensuring proper data use and access.
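FICO does not publish the catalog's detection rules, but automated sensitive-data identification generally works by profiling sampled values against known patterns. The small Python scanner below illustrates the idea; the patterns, threshold, and column sample are assumptions made for the sketch, and a production service would combine a far richer rule set with machine learning.

```python
import re

# Illustrative PII patterns only; these are assumptions for the sketch,
# not the catalog service's actual detection rules.
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "phone": re.compile(r"^\+?\d[\d\s().-]{7,}\d$"),
}

def classify_column(values, threshold=0.8):
    """Return the PII tag whose pattern matches at least `threshold`
    of the sampled values, or None if no pattern qualifies."""
    sample = [v for v in values if v is not None]
    if not sample:
        return None
    for tag, pattern in PATTERNS.items():
        hits = sum(bool(pattern.match(str(v))) for v in sample)
        if hits / len(sample) >= threshold:
            return tag
    return None

# Hypothetical column sample that would be tagged for governance review.
print(classify_column(["ana@example.com", "bob@example.org", None]))  # email
```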
Users can build solutions that consume a variety of data from data lakes, serving both batch and streaming sources. A user-friendly interface provides visually composable blocks that allow users to ingest, correlate and analyze a variety of data. Low-latency processing, combined with integration with other applications through FICO® Decision Management Platform, significantly reduces the time to build real-time streaming decision solutions. The solution supports always-on, scheduled and transient processing, allowing users to run a variety of solutions on a single platform.
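On the platform, a flow like this is assembled from visual blocks rather than written by hand. The PySpark Structured Streaming sketch below shows an equivalent hand-coded flow under assumed details: the Kafka broker and topic names, the event schema, and a stand-in threshold rule used in place of a real decision model.

```python
# Sketch of a real-time streaming decision flow. Broker, topics, schema,
# and the threshold rule are assumptions, not the platform's internals.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("stream-decision").getOrCreate()

schema = (
    StructType()
    .add("customer_id", StringType())
    .add("amount", DoubleType())
)

# Ingest: read the event stream and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Decide: stand-in logic where a deployed predictive model would score events.
decisions = events.withColumn(
    "decision", F.when(F.col("amount") > 5000, "review").otherwise("approve")
)

# Act: publish decisions back to a topic for downstream applications.
out = decisions.select(F.to_json(F.struct(*decisions.columns)).alias("value"))
query = (
    out.writeStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("topic", "decisions")
    .option("checkpointLocation", "hdfs:///chk/stream-decision")
    .start()
)
query.awaitTermination()
```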