Thursday, December 15, 2022

Intec Singl.eView Billing Fundamentals

Singl.eView Three-Tier Architecture:

Tier 1
• Tier 1 is the interface of the 'outside' world with Convergent Billing. Clients, remote hosts, and external applications enter, inspect, and modify data, and initiate processes in Convergent Billing.

Tier 2
• Tier 2 consists of Convergent Billing's expression-driven configuration, the Transaction Engine (TRE), and non-TRE Convergent Billing processes. Tier 2 manages transactions initiated from, and returns results to, the first tier, as well as managing all database access. Supported platforms for Convergent Billing are AIX, Solaris, and HP-UX (PA-RISC and IA64).

Tier 3
• The database tier in Convergent Billing consists of the Oracle database, which stores the business, administrative, and configuration data used in Convergent Billing.

Singl.eView Configuration Layers:

Configuration elements are:
• Convergent Billing core software, consisting of key functionality for customer care, rating, billing, and workflow.
• Common framework processes, interfaces, and subsystems, consisting of a set of predefined solutions for common configuration requirements.
• Market-specific solutions and preconfigured product sets, consisting of configuration that is common to a specific set of products or solutions.
• Client-specific business rules, consisting of a collection of billing rules that the service provider configures to align customer care and billing processing with their own business requirements.

Rating and Billing Overview:

Rating and billing is divided into seven operations:
• Balance management, which provides event authorization, advice-of-charge information, and customer credit reservations.
• Event normalization, in which input event and transaction data is converted into a standard format for storage in the Convergent Billing database.
• Event rating, in which the normalized event records are aggregated and rated (costed) by applying the appropriate tariffs.
• Event output, in which the costed event records are stored in the Convergent Billing database.
• Billing, in which the costed event records are aggregated, and additional charges and discounts can be applied (for example, for recurring events).
• Invoicing, in which billing data is combined with customer detail records and incorporated into an invoice or statement image.
• Invoice output, in which invoice and statement images are printed or converted for electronic distribution.

The seven operations are carried out by the following three major components (a conceptual sketch of the normalize/rate/output flow follows this list):
• The trerate TRE server, which authorizes real-time events, handles credit checking, and passes completed events on to the rating engine to be stored in the database.
• The rating engine, which combines normalization and rating. The rating engine is a series of pipelined processes, which primarily communicate using shared memory. Rating input is raw call (or 'event') records; output is database records containing normalized events and charges rated against those events (costed events).
• The billing engine, which passes information back to the rating engine for the generation of recurring charges and adjustments, calculates billing charges, generates invoice data, and creates the invoice images for printing. Billing operations include the selection, sorting, and output of invoices or statements to printers or other output devices.
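To make the pipeline concrete, here is a minimal, purely illustrative Python sketch of the normalize/rate/output flow. None of these names come from Singl.eView itself: the event fields, the flat tariff, and the three functions are hypothetical stand-ins for the normalization, rating, and output stages described above.

# Illustrative only: a toy normalize -> rate -> output pipeline.
# Field names and the flat tariff are invented, not Singl.eView APIs.

RATE_PER_SECOND = 0.002  # assumed flat tariff for the sketch

def normalize(raw_record):
    """Normalization stage: convert a raw pipe-delimited event into a standard format."""
    caller, callee, seconds = raw_record.split('|')
    return {'caller': caller, 'callee': callee, 'duration': int(seconds)}

def rate(event):
    """Rating stage: apply the tariff to the normalized event to derive a charge."""
    event['charge'] = round(event['duration'] * RATE_PER_SECOND, 4)
    return event

def output(event):
    """Output stage: persist the costed event (printed here instead of stored)."""
    print(event)

for raw in ['0401111111|0402222222|63', '0401111111|0403333333|125']:
    output(rate(normalize(raw)))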
TRE Overview and Functionality:

The Transaction Engine (TRE) forms the core of the application tier of the open, three-tier architecture of Convergent Billing, which is based on Tuxedo middleware. The TRE provides:
• Transaction management
• Cache management
• Connection authentication
• Multiple client access
• Asynchronous alerts
• A comprehensive set of business services and functions (exposed using APIs).

Convergent Billing Interfaces:

Convergent Billing is able to interface in multiple ways with multiple external systems, including the following:
• Siebel
• Oracle Financials
• SAP
• Clarify
• PeopleSoft
• Various network switches in batch and real-time modes.

Convergent Billing Data Model:

The Convergent Billing data model has the following characteristics:
• Entity relationships are determined dynamically through the configured business rules.
• Historical information is maintained for all entities (through the use of date ranging).
• Interpretation of data in generic table columns is dependent upon configuration.

The main database entities in the Convergent Billing data model are:
• Accounts
• Contacts
• Customers
• Payments
• Products
• Queries.

Customizing Convergent Billing:

• Reference Types
Reference types are defined lists of items and are referenced throughout Convergent Billing; many of the drop-down lists available on the customer care forms are defined as reference types. Reference types are associated with attribute types to include drop-down lists in new and customized fields.
• Attribute Types
Attribute types are the major building blocks for entity validation and define the attributes of a field. An attribute type overrides the field's existing attributes, allowing the field to be customized.
• Entity Validation
Entity validation is the key to extensively configuring the customer care environment. Entity validation is used to customize and add fields to customer care forms, and to add validation. Both attribute types and reference types are used in the definition of entity validation.
• Derived Attributes
Derived attribute variables return one or more values based either on an expression or on a lookup of a table defined within the derived attribute. Derived attribute variables and derived attribute tables allow service providers to capture and store their business rules in a tabular format. Derived attributes are variables derived from one or more other variables by using:
• Simple expressions
• Conditional evaluation (similar to functions)
• Lists (tables)
• Table look-ups.
(A toy illustration of a derived attribute as a table look-up follows at the end of this section.)

Product Data Model:

Product Model Entities
Entities that comprise the product data model are described below.
• Equipment
A product may have one or more pieces of equipment. Equipment can be a physical piece of equipment (for example, a telephone) or conceptual (for example, a telephone number). Equipment can be reused for allocation to multiple services.
• Service
A product must have at least one service. For example, a wireline or wireless telephony product might offer one or more different service types, including voice or fax.
• Facility Groups
A facility group is a collection of one or more service options (also referred to as facilities, features, supplementary services, or value-add services). The group is associated with a specified service and defined as part of a product.
• Product Groups
Products can be grouped to allow an operator to locate related products more easily when assigning products to customers.
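As promised above, here is a toy Python illustration of the derived-attribute idea: a variable derived from another variable through a table look-up with a default. The table, its keys, and its values are invented for the example; real derived attributes are configured inside Convergent Billing, not written in Python.

# Illustrative only: a derived attribute as a table look-up with a default.
DISCOUNT_BAND = {          # derived attribute table: customer segment -> discount %
    'RESIDENTIAL': 0.0,
    'SME': 5.0,
    'CORPORATE': 12.5,
}

def derive_discount(segment):
    """Return the discount derived from the customer segment."""
    return DISCOUNT_BAND.get(segment.upper(), 0.0)  # default when no row matches

print(derive_discount('Corporate'))  # 12.5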
• Tariffs
Tariffs specify the charges and benefits applied to events, and the rules for applying them, to calculate cost information for inclusion on an invoice or statement. Tariffs can be used to determine:
• Whether to apply a charge
• The amount of the charge
• The allocation of the charge to the appropriate accounts and GL codes.

Charge Categories:

• A charge category identifies the default account type and general ledger (GL) code to which a calculated tariff charge or benefit is allocated. For services, the charge category also identifies a specific account number. The association between a tariff and a charge category is specified when the tariff is defined, and is referred to as the tariff/charge category pair.
• Charge categories allow guiding to a 'To' account, 'From' account, 'To' GL code, and 'From' GL code. The account type specified in a charge category definition is translated to the actual account number and stored in the charge category instance (which is created when the product instance is created).

Batch Rating Engine:

Steps Used by the Rating Engine
The basic steps of input, normalising, rating, and output used by the rating engine are outlined below.
• Normalising
Before Convergent Billing can rate events, incoming events must first be normalised. Normalising converts an event to the Convergent Billing native event format and is executed by the Event Normalisation (ENM) process. The normalisation process validates the record, verifies its accuracy, and formats it for rating. Convergent Billing provides a mapping language called DIL (Data Interface Language) that allows event data in any incoming format to be mapped and converted into Convergent Billing normalised events. DIL can handle the mapping of events in both ASCII and binary formats. (A toy illustration of this kind of mapping follows at the end of this section.)
• Rating
During rating, chargeable elements are used by the Event Rating (ERT) process to determine the tariff to be applied, which is then calculated to derive an event charge.
• Output
After charges are generated by the ERT, the Event Rating Output (ERO) process takes the charges and associated normalised events from the event cache and outputs them.

Rating Processes
There are essentially five billing components that execute the rating functions in Singl.eView:
• Event Rating Broker (BKR) process
• Event Normalisation (ENM) process
• Event Rating (ERT) process
• Event Rating Output (ERO) process
• TRE Rating Server (trerate).
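As a rough picture of what a DIL-style mapping does (this is not DIL syntax, and the fixed-width layout below is invented), here is a Python sketch that slices a fixed-width ASCII record into the named fields of a normalised event:

# Illustrative only: slicing a fixed-width record into named event fields.
FIELDS = [                 # (name, start, end) offsets into the raw ASCII record
    ('a_number', 0, 10),
    ('b_number', 10, 20),
    ('duration', 20, 26),
]

def map_event(raw):
    """Map a fixed-width raw record to a normalised event dictionary."""
    event = {name: raw[start:end].strip() for name, start, end in FIELDS}
    event['duration'] = int(event['duration'])
    return event

print(map_event('04011111110402222222000063'))
# {'a_number': '0401111111', 'b_number': '0402222222', 'duration': 63}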

Monday, December 12, 2022

Big Data Learn Step by Step

https://www.mltut.com/how-to-learn-big-data-step-by-step/

Friday, August 19, 2022

REST API

REST API stands for REpresentational State Transfer Application Programming Interface. REST refers to a group of software architecture design constraints that bring about efficient, reliable, and scalable systems. It is a data architecture and design methodology that produces predictable and consistent behaviours and outputs by receiving a set of standard methods called verbs and returning standardized structured data, typically JSON or XML, called the resource.

An API is a set of features and rules that exist inside a software program, enabling interaction between that software and other items, such as other software or hardware. In the context of REST APIs, the API is the collection of tools used to access and work with REST resources through the verbs, including GET, POST, PUT, and DELETE.

URL vs URI vs URN
------------------
URL --> an actual physical location.
URI vs URL --> all URLs are URIs, but not all URIs are URLs.
URN --> a subset of URI: a unique name identifier, say for a person. A URN can also be a URL, but doesn't have to be. So, in conclusion, a URL might also be a URN, and both are URIs.

The Six Constraints of REST
---------------------------
1) Client-Server Architecture: the client manages user interface concerns while the server manages data storage concerns.
2) Statelessness: no client context or information, aka "state", can be stored on the server between requests.
3) Cacheability: all REST responses must be clearly marked as cacheable or not cacheable.
4) Layered System: the client cannot know, and shouldn't care, whether it is connected directly to the server or to an intermediary like a CDN or mirror.
5) Code on Demand: servers are allowed to transfer executable code, like client-side JavaScript and compiled components, to clients.
6) Uniform Interface:
6.1) Resource identification in requests: the URI request must specify what resource it is looking for and what format the response should use.
6.2) Resource manipulation through representations: once a client has a representation of a resource, it can modify or delete the resource.
6.3) Self-descriptive messages: a uniform interface must issue self-descriptive messages. This applies to both sending and receiving REST data. Each representation must describe its own data format.
6.4) Hypermedia as the engine of application state: once a client has access to a REST service, it should be able to discover all available resources and methods through the hyperlinks provided.

Humans do not directly interact with the REST API. Communication with the REST API is handled by the client, which can be anything, really: a website, an app, even an Internet of Things device. A minimal client sketch in Python follows below.
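To make the verbs concrete, here is a minimal Python sketch using the requests library; the endpoint URL and the widget resource are hypothetical stand-ins, not a real service.

# Minimal REST client sketch using the requests library.
# The endpoint and the widget resource are hypothetical.
import requests

BASE = 'https://api.example.com'           # hypothetical REST service

r = requests.get(f'{BASE}/widgets/42')     # GET: read the resource
print(r.status_code, r.json())             # JSON is the typical representation

r = requests.put(f'{BASE}/widgets/42',     # PUT: replace the representation
                 json={'name': 'sprocket', 'size': 'large'})

r = requests.delete(f'{BASE}/widgets/42')  # DELETE: remove the resource
print(r.status_code)                       # e.g. 204 No Content on success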

Friday, July 1, 2022

Python Script for Data Recon

from collections import OrderedDict as od
import pandas as pd

def diff_func(df_left, df_right, uid, labels=('Left', 'Right'), drop=[[], []]):
    dict_df = {labels[0]: df_left, labels[1]: df_right}
    col_left = df_left.columns.tolist()
    col_right = df_right.columns.tolist()

    # There could be columns known to be different, hence allow the user to
    # pass these as lists to be dropped.
    if drop[0] or drop[1]:
        print('{}: Ignoring columns {} in comparison.'.format(labels[0], ', '.join(drop[0])))
        print('{}: Ignoring columns {} in comparison.'.format(labels[1], ', '.join(drop[1])))
        col_left = list(filter(lambda x: x not in drop[0], col_left))
        col_right = list(filter(lambda x: x not in drop[1], col_right))
        df_left = df_left[col_left]
        df_right = df_right[col_right]

    # Step 1 - Check if the number of columns is the same.
    len_lr = len(col_left), len(col_right)
    assert len_lr[0] == len_lr[1], \
        'Cannot compare frames with different number of columns: {}.'.format(len_lr)

    # Step 2a - Check if the sets of column headers are the same (order doesn't matter).
    assert set(col_left) == set(col_right), \
        'Left column headers are different from right column headers.' \
        + '\n Left orphans: {}'.format(list(set(col_left) - set(col_right))) \
        + '\n Right orphans: {}'.format(list(set(col_right) - set(col_left)))

    # Step 2b - Check if the column headers are in the same order.
    if col_left != col_right:
        print('[Note] Reordering right DataFrame...')
        df_right = df_right[col_left]

    # Step 3 - Check the datatypes are the same (order is important).
    if all(df_left.dtypes == df_right.dtypes):
        print('DataType check: Passed')
    else:
        print('dtypes are not the same.')
        df_dtypes = pd.DataFrame({labels[0]: df_left.dtypes,
                                  labels[1]: df_right.dtypes,
                                  'Diff': (df_left.dtypes == df_right.dtypes)})
        df_dtypes = df_dtypes[df_dtypes['Diff'] == False][[labels[0], labels[1], 'Diff']]
        print(df_dtypes)

    # Step 4 - Check for duplicate rows.
    for key, df in dict_df.items():
        if df.shape[0] != df.drop_duplicates().shape[0]:
            print(key + ': Duplicates exist, they will be dropped.')
            dict_df[key] = df.drop_duplicates()

    # Step 5 - Check for duplicate uids.
    if isinstance(uid, (str, list)):
        print('Uniqueness check: {}'.format(uid))
        for key, df in dict_df.items():
            count_uid = df.shape[0]
            count_uid_unique = df[uid].drop_duplicates().shape[0]
            dp = [0, 1][count_uid_unique == df.shape[0]]  # <-- round off to the nearest integer if it is 100%
            pct = round(100 * count_uid_unique / df.shape[0], dp)
            print('{}: {} out of {} are unique ({}%).'.format(key, count_uid_unique, count_uid, pct))

    # Checks complete, begin merge.
    d_result = od()
    d_result[labels[0]] = df_left
    d_result[labels[1]] = df_right
    if all(df_left.eq(df_right).all()):
        print('Trivial case: DataFrames are an exact match.')
        d_result['Merge'] = df_left.copy()
    else:
        df_merge = pd.merge(df_left, df_right, on=col_left, how='inner')
        if not df_merge.shape[0]:
            print('Trivial case: Merged DataFrame is empty')
        d_result['Merge'] = df_merge
        if type(uid) == str:
            uid = [uid]
        if type(uid) == list:
            # pd.concat replaces DataFrame.append, which was removed in pandas 2.0.
            df_left_only = pd.concat([df_left, df_merge]).reset_index(drop=True)
            df_left_only['Duplicated'] = df_left_only.duplicated(keep=False)  # keep=False marks all duplicates as True
            df_left_only = df_left_only[~df_left_only['Duplicated']]
            df_right_only = pd.concat([df_right, df_merge]).reset_index(drop=True)
            df_right_only['Duplicated'] = df_right_only.duplicated(keep=False)
            df_right_only = df_right_only[~df_right_only['Duplicated']]
            label = '{} or {}'.format(*labels)
            df_lc = df_left_only.copy()
            df_lc[label] = labels[0]
            df_rc = df_right_only.copy()
            df_rc[label] = labels[1]
            df_c = pd.concat([df_lc, df_rc]).reset_index(drop=True)
            df_c['Duplicated'] = df_c.duplicated(subset=uid, keep=False)
            df_c1 = df_c[df_c['Duplicated']]
            df_c1 = df_c1.drop('Duplicated', axis=1)
            cols = df_c1.columns.tolist()
            df_c1 = df_c1[[cols[-1]] + cols[:-1]]  # move the label column to the front
            df_uc = df_c[~df_c['Duplicated']]
            df_uc_left = df_uc[df_uc[label] == labels[0]]
            df_uc_right = df_uc[df_uc[label] == labels[1]]
            d_result[labels[0] + '_only'] = df_uc_left.drop(['Duplicated', label], axis=1)
            d_result[labels[1] + '_only'] = df_uc_right.drop(['Duplicated', label], axis=1)
            d_result['Diff'] = df_c1.sort_values(uid).reset_index(drop=True)
    return d_result
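A quick usage sketch, run after the script above (the DataFrames, column names, and labels here are invented for illustration):

import pandas as pd

df_a = pd.DataFrame({'id': [1, 2, 3], 'amount': [10.0, 20.0, 30.0]})
df_b = pd.DataFrame({'id': [1, 2, 4], 'amount': [10.0, 25.0, 40.0]})

result = diff_func(df_a, df_b, uid='id', labels=('Source', 'Target'))
print(result['Diff'])         # rows sharing an id but differing in content
print(result['Source_only'])  # rows present only in the left frame
print(result['Target_only'])  # rows present only in the right frame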

Friday, June 24, 2022

A live example of how a CTL file in Oracle can be tuned to improve load times and thereby achieve faster data loads.

=============

mail one

------------

> PFA the tuned CTL files. Please check them and let us know the performance
> difference before and after using the tuned CTLs.
>
> Please note down the time and the number of records/rows loaded to the target
> for each script.
>
> Waiting for your feedback.
>
> Thanks and Regards
>
> Shiju

=============

mail two

------------


> Hi Shiju
>
> I loaded the data using the control files that you provided, and that
> improved the performance.
> The timings to load 5,136,324 records were:
>
> Before modifying the code: 11 mins
> After modifying the code (as suggested by you): 7 mins
>
> Thank you very much for providing such a tuning technique.
>
> Thanks and Regards


=============

mail three

------------

Thanks for the feedback.

What I did was increase the "bind array" size for the SQL*Loader tool.

There are a few more steps we need to follow to get a well-tuned result.

First, we need to estimate the rollback segment (RBS) requirement.

1) Determine the size of a single row.

2) Determine the size of the bind array:

memory = number of rows * row size
bind array size = min(memory, bindsize)

3) Estimate the required rollback segment size:

rollback segment size = 1.3 * bind array size

When estimating the rollback segment size, I recommend adding 30% to the bind array size for overhead.
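As a rough worked example of this arithmetic in Python (all the numbers are invented; ROWS and BINDSIZE refer to the standard sqlldr command-line parameters):

# Worked example of the sizing arithmetic above; the numbers are invented.
rowsize = 120                    # bytes per row, estimated from the table definition
rows = 5000                      # ROWS parameter passed to sqlldr
bindsize = 20 * 1024 * 1024      # BINDSIZE parameter: 20 MB

memory = rows * rowsize                   # 600,000 bytes
bind_array_size = min(memory, bindsize)   # 600,000 bytes (bindsize is the cap)
rbs_size = 1.3 * bind_array_size          # ~780,000 bytes, i.e. +30% overhead
print(bind_array_size, rbs_size)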

Since SQL*Loader cannot select an RBS on its own, we need to take offline any other rollback segments that are smaller than the size we calculated, so that the load uses a sufficiently large one.

A few more tips:

1) Drop the indexes on the table you are loading data into, then recreate
them after loading.
(Please don't drop indexes associated with primary and unique key constraints.)

2) Use larger redo log files.

3) A fixed-width input data file loads faster than a CSV file.

NB: Please perform all these steps only with the help of a DBA.

Thanks and Regards

Thursday, June 16, 2022

Select * from tab equivalent in SQL Server

 Oracle SQL:: Select * from tab;


MS SQL Server::  


select TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
from INFORMATION_SCHEMA.TABLES
order by TABLE_TYPE, TABLE_NAME;

Tuesday, May 10, 2022

What is a Runbook?

 


A runbook is a step-by-step guide that describes all procedures and operations that must be considered or addressed during the cutover. 

A runbook contains all the required documentation for each member of the cutover team to complete their tasks, as well as timelines and detailed instructions so that everyone can work together effectively and efficiently. 

The larger and more complicated your business's existing systems are, the more extensive your deployment runbook will need to be to ensure there are no problems throughout the cutover process. Most IT networks today span several systems and include third-party applications, so all of them should be covered in the runbook to avoid or minimize downtime.

Sunday, April 24, 2022

Back to data migration

Hi All, I am once again back to data migration, which is the technical work I enjoy most.


I will start with some notes about the challenges of bad/dirty data from a broad perspective.


We can broadly classify dirty data into:


Inaccurate

Incomplete

Duplicate

Siloed

Ungoverned

Stale

Unshared

Inconsistent