Relavance, The Only Patented Associative Technology
Associative Data Warehouse™©
A New Data Warehouse Model
For the first time, Relavance is bringing to market a technology that attains the ultimate goal in data management. It is the only truly associative database model in which every piece of information is atomic and can be associated with any other piece of information. There are NO restrictions, NO constraints, NO rows, NO views, and NO cubes.
Unlike linear database models, associative databases operate in three dimensions by default, and in principle in n dimensions. The advantages are numerous and impressive:
- Single instance storage—no piece of data is ever duplicated
- No tables or indexing
- Automated data aggregation—use any number of sources
- Unprecedented security—with no name-space or storage structure binding, there is nothing to hack into!
Easily manage all compliance activities!
- Create and associate schedules and requirements
- Assign and manage users, deadlines and priorities
- Upload data sets, rules, processes, and workflows
- View current and historical details of all regulations
Greater peace of mind
If you are a large pharmaceutical or biotech company, a healthcare provider, a bank, a research institution, or any sizable business with regulatory requirements, you understand the challenges of keeping your organization in compliance with complex and changing regulations. Our compliance software provides an automated workflow customizable to align with your business:
- Interactively manages all compliance processes
- Integrates legal and regulatory impact assessments, policy management, surveys, and incidents
- Ensures accountability throughout organizations
Relavance compliance software is guaranteed to reduce the time and effort required to keep your organization compliant. Now you can interactively manage regulatory compliance with lower cost, less risk, and more peace of mind!
New Associative Intelligent Technologies™© as Enablers for Data Management Automation
Raw Disk - Hardware Storage Efficiency Optimization Methodologies (5-10x)
A methodology for mapping data directly onto a permanent storage system using a deterministic algorithm that takes exactly 4 lookups to reach any of 4 billion file / storage nodes. Current methods based on b-trees and modified b-trees average 24 lookups for half a billion file nodes. This represents an efficiency and performance increase approaching a full order of magnitude over existing file storage systems, and it avoids the root-node single point of failure from which they suffer. The system self-optimizes as it runs, learning to fine-tune goal-oriented performance features.
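The arithmetic behind the "exactly 4 lookups" claim can be illustrated with a fixed-depth, 256-way radix map: four one-byte lookups cover 256^4 = 4,294,967,296 (~4 billion) addressable nodes. This is only an illustrative sketch of that style of deterministic addressing, not the patented algorithm itself:

```python
# Illustrative sketch (not Relavance's actual algorithm): a fixed-depth,
# 256-way radix map reaches any of 256**4 = 2**32 (~4 billion) storage
# nodes in exactly 4 indexed reads, versus ~24 average comparisons for a
# b-tree over half a billion nodes.

FANOUT = 256   # one byte of the 32-bit node id per level
DEPTH = 4      # 256**4 = 2**32 addressable nodes

def make_level():
    return [None] * FANOUT

def store(root, node_id, payload):
    """Walk exactly DEPTH levels, creating tables on the way down."""
    table = root
    for level in range(DEPTH - 1):
        byte = (node_id >> (8 * (DEPTH - 1 - level))) & 0xFF
        if table[byte] is None:
            table[byte] = make_level()
        table = table[byte]
    table[node_id & 0xFF] = payload

def lookup(root, node_id):
    """Exactly DEPTH (= 4) indexed reads, independent of population size."""
    table = root
    for level in range(DEPTH - 1):
        byte = (node_id >> (8 * (DEPTH - 1 - level))) & 0xFF
        table = table[byte]
        if table is None:
            return None
    return table[node_id & 0xFF]

root = make_level()
store(root, 0xDEADBEEF, "block @ 0xDEADBEEF")
print(lookup(root, 0xDEADBEEF))   # block @ 0xDEADBEEF
```

Because the path to any node is computed from the id itself, the structure needs no rebalancing and has no privileged root record whose loss invalidates the rest of the map.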
Link Box - Inter-Processor Communication Efficiency Optimization Methodologies (5-10x)
A methodology for interconnecting sets of up to 64 processing nodes, each with up to 64 cores, in a networked cluster that enables point-to-point communication over existing NIC hardware, with set-up times in the tens of microseconds instead of the milliseconds typical today, and without the specialized hardware used in very high-performance InfiniBand and Myrinet products.
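The point about commodity NICs can be made concrete with ordinary datagram sockets: UDP already delivers point-to-point messages with no per-message connection handshake. This is only a minimal illustration of that class of transport, not the Link Box protocol itself:

```python
# Minimal illustration (not the Link Box protocol): commodity UDP sockets
# support point-to-point messages with no per-message connection set-up,
# the kind of path a low-latency overlay on ordinary NICs can exploit
# instead of InfiniBand-class hardware.

import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))              # OS picks a free port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"core 17 -> core 42: payload", addr)  # no handshake needed

data, _ = recv.recvfrom(1024)
print(data.decode())   # core 17 -> core 42: payload
recv.close()
send.close()
```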
Network Memory System - ‘n’-Dimensional Informational Inter-Connect Optimization
An architecture and set of methodologies enabling the fully automated, intelligent distribution of datasets over a network of processing nodes, typically a networked server farm of physical or virtual machines. It operates at the base network communications level as a self-optimizing, point-to-point data-routing overlay.
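One way a point-to-point overlay can "self-optimize" is by continuously refining its latency estimates for each peer from observed traffic. The mechanism below is an assumption made for illustration, not the Relavance design:

```python
# Hedged sketch of a self-optimizing point-to-point overlay (the details
# are illustrative assumptions): each node keeps a peer table of observed
# round-trip latencies, folds in new samples as traffic flows, and can
# pick the currently fastest peer for bulk transfers.

class OverlayNode:
    ALPHA = 0.2  # smoothing factor for the latency moving average

    def __init__(self, name):
        self.name = name
        self.latency_ms = {}   # peer -> exponentially smoothed RTT

    def observe(self, peer, rtt_ms):
        """Fold a new RTT sample into the running estimate."""
        old = self.latency_ms.get(peer, rtt_ms)
        self.latency_ms[peer] = old + self.ALPHA * (rtt_ms - old)

    def best_relay(self):
        """Peer with the lowest smoothed latency right now."""
        return min(self.latency_ms, key=self.latency_ms.get)

node = OverlayNode("n0")
for peer, samples in {"n1": [9.0, 11.0], "n2": [3.0, 5.0]}.items():
    for rtt in samples:
        node.observe(peer, rtt)
print(node.best_relay())   # n2
```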
Automated Concept / Model-Based Data Distribution (over ‘n’ processing nodes)
Built on the preceding technologies, this is a methodology and architecture that automatically assimilates, segments, and distributes any dataset over a distributed set of processing nodes according to a run-time-definable, high-level organizational model that can generically accommodate any dataset. The system intrinsically tracks where every piece of data is placed and provides access to each piece without requiring any search query or map-reduce operation.
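The "no search, no map-reduce" property follows whenever placement is a deterministic function of the data itself, since every participant can compute where a datum lives. The following is an illustrative sketch of that principle under assumed details (an 8-node cluster, hash-based placement), not the actual Relavance mechanism:

```python
# Illustrative sketch only: a deterministic placement function lets every
# participant compute which of n processing nodes holds a given datum, so
# retrieval needs no search or map-reduce scan -- one hash, one direct fetch.

import hashlib

NODES = ["node-%02d" % i for i in range(8)]   # assumed 8-node cluster

def owner(key: str) -> str:
    """The same key always maps to the same node, on every machine."""
    digest = hashlib.sha256(key.encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

cluster = {n: {} for n in NODES}

def put(key, value):
    cluster[owner(key)][key] = value      # placement is tracked implicitly

def get(key):
    return cluster[owner(key)].get(key)   # direct access, no scan

put("patient/1234/hba1c", 6.1)
print(get("patient/1234/hba1c"))   # 6.1
```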
Virtual-PK-Based Data Organization (enabling cross-domain / cross-base poly-table querying)
A methodology for generalizing all datasets through correlation / integration with a meta-architecture, making every dataset automatically compatible with every other and thus allowing cross-dataset querying without runtime joins or the implementation of a data warehouse.
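The idea of a virtual primary key can be sketched as follows: every atomic value from any dataset is interned once under a single global identifier, and cross-dataset relationships are stored as identifier pairs, so a "join" reduces to a direct association lookup. The naming and structure here are assumptions for illustration only:

```python
# Minimal sketch of virtual-PK generalization (names and structure are
# illustrative assumptions): every atomic value from any dataset gets one
# global virtual primary key (VPK), and cross-dataset relationships are
# stored as VPK links -- so "joins" become direct association lookups.

import itertools

_next_vpk = itertools.count(1)
vpk_of = {}      # (dataset, field, value) -> VPK; one instance per value
value_of = {}    # VPK -> (dataset, field, value)
links = {}       # VPK -> set of associated VPKs

def intern(dataset, field, value):
    key = (dataset, field, value)
    if key not in vpk_of:
        vpk = next(_next_vpk)
        vpk_of[key] = vpk
        value_of[vpk] = key
    return vpk_of[key]

def associate(a, b):
    links.setdefault(a, set()).add(b)
    links.setdefault(b, set()).add(a)

def related(vpk):
    """Cross-dataset 'query' with no runtime join: follow the links."""
    return sorted(value_of[v] for v in links.get(vpk, ()))

# Two independent datasets become mutually queryable via VPKs:
cust = intern("crm", "customer", "ACME Corp")
inv = intern("billing", "invoice", "INV-0042")
associate(cust, inv)
print(related(cust))   # [('billing', 'invoice', 'INV-0042')]
```

Interning also gives the single-instance-storage property claimed earlier: re-interning the same value returns the existing VPK rather than duplicating the value.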
Self-Distributing Queries (over ‘n’ processing nodes)
A methodology based on run-time-definable, high-level organizational models whose resulting organization supports completely generic ad-hoc querying across models of any complexity and any number of processing nodes, with automatic segmentation of queries.
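Auto-segmentation of a query usually means a scatter-gather pattern: the query is split into one sub-query per node, each runs where its shard of the data lives, and the partial results are merged. The toy cluster and aggregate below are assumptions for illustration, not the Relavance planner:

```python
# Hedged sketch of query auto-segmentation (the actual planner is not
# described in the source): a query against the whole model is split into
# one sub-query per processing node, run against each node's shard, and
# the partial results merged -- no hand-written sharding logic.

from concurrent.futures import ThreadPoolExecutor

# Assumed toy cluster: each node holds a shard of (region, amount) rows.
shards = {
    "node-a": [("east", 10), ("west", 5)],
    "node-b": [("east", 7)],
    "node-c": [("west", 3), ("east", 1)],
}

def sub_query(rows, region):
    """The per-node segment of 'total amount for region'."""
    return sum(amt for r, amt in rows if r == region)

def distributed_total(region):
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda rows: sub_query(rows, region),
                            shards.values())
    return sum(partials)   # merge step

print(distributed_total("east"))   # 18
```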
Byte-Stream Factoring and Feature Identification and Extraction