FUNCTION dc_iniload ( cIniFile, cGroup, aArray, aReference )
LOCAL lExact := SET(_SET_EXACT), cValue, nFound, cParam, ;
xValue, cString, nHandle, aRef, i, lFound
cGroup := UPPER(IIF(Valtype(cGroup)=='C',cGroup,''))
aRef := ACLONE(aReference)
IF Left(cGroup,1)#'['
cGroup := '['+cGroup+']'
ENDIF
nHandle := DC_TXTOPEN( cIniFile )
SET(_SET_EXACT,.t.)
FOR i := 1 TO LEN(aRef)
aRef[i] := UPPER(aRef[i])
NEXT
lFound := .f.
IF nHandle > 0
DO WHILE !DC_TXTEOF(nHandle)
cString := ALLTRIM(DC_TXTLINE(nHandle))
DC_TXTSKIP(nHandle,1)
IF !EMPTY(cGroup) .AND. cGroup#UPPER(cString)
LOOP
ELSEIF cGroup==UPPER(cString)
cGroup := ''
ELSEIF Left(cString,1)=='[' .AND. EMPTY(cGroup)
EXIT
ENDIF
IF Left(cString,1)="*" .OR. Left(cString,1)='/'
LOOP
ELSEIF !("="$cString)
LOOP
ENDIF
IF '//'$cString
cString := TRIM(SUBSTR(cString,1,AT('//',cString)-1))
ELSEIF '/*'$cString
cString := TRIM(SUBSTR(cString,1,AT('/*',cString)-1))
ENDIF
cParam := UPPER(Alltrim(SubStr(cString,1,AT('=',cString)-1)))
cValue := Alltrim(SubStr(cString,AT('=',cString)+1))
nFound := ASCAN( aRef, cParam )
IF nFound = 0
LOOP
ENDIF
xValue := aArray[nFound]
IF Valtype(xValue)='C'
xValue := cValue
ELSEIF Valtype(xValue)='D'
xValue := CTOD(cValue)
ELSEIF Valtype(xValue)='N'
xValue := VAL(cValue)
ELSEIF Valtype(xValue)='L'
xValue := UPPER(cValue)=='YES'
ELSEIF Valtype(xValue)='B'
xValue := &(cValue)
ENDIF
aArray[nFound] := xValue
lFound := .t.
ENDDO
DC_TXTCLOSE( nHandle )
ENDIF
SET(_SET_EXACT,lExact)
RETURN lFound
/* ------------------------ */
FUNCTION dc_inisave ( cIniFile, cGroup, aArray, aReference )
LOCAL nHandle, cString, aIni := {}, i, j, k, cValue, xValue, nIni, ;
lExact := SET(_SET_EXACT,.f.)
cGroup := UPPER(IIF(Valtype(cGroup)=='C',cGroup,''))
IF Left(cGroup,1)#'['
cGroup := '['+cGroup+']'
ENDIF
nHandle := DC_TXTOPEN( cIniFile )
IF Valtype(aArray)='C'
aArray := { aArray }
ENDIF
IF Valtype(aReference)='C'
aReference := {aReference}
ENDIF
nIni := 0
IF nHandle > 0
DO WHILE !DC_TXTEOF(nHandle)
cString := ALLTRIM(DC_TXTLINE(nHandle))
DC_TXTSKIP(nHandle,1)
IF Left(cString,1) = '['
AADD( aIni, { cString } )
nIni := LEN(aIni)
ELSEIF EMPTY(cString)
ELSEIF nIni > 0
AADD( aIni[nIni], cString )
ENDIF
ENDDO
ENDIF
DC_TXTCLOSE(nHandle)
FOR i := 1 TO LEN(aIni)
IF UPPER(aIni[i,1]) == cGroup
EXIT
ENDIF
NEXT
IF i > LEN(aIni)
AADD( aIni, { cGroup } )
ENDIF
FOR j := 1 TO LEN(aArray)
FOR k := 1 TO LEN(aIni[i])
IF UPPER(aIni[i,k])=UPPER(aReference[j])+'='
EXIT
ELSEIF UPPER(aIni[i,k])=UPPER(aReference[j])+' '
EXIT
ELSEIF EMPTY(aReference[j])
EXIT
ENDIF
NEXT
xValue := aArray[j]
cValue := ''
IF Valtype(xValue)='C'
cValue := xValue
ELSEIF Valtype(xValue)='N'
cValue := ALLTRIM(STR(xValue))
ELSEIF Valtype(xValue)='D'
cValue := DTOC(xValue)
ELSEIF Valtype(xValue)='L'
cValue := IIF(xValue,'Yes','No')
ENDIF
IF EMPTY(aReference[j])
ELSEIF k <= LEN(aIni[i])
aIni[i,k] := aReference[j]+'='+cValue
ELSE
AADD(aIni[i], aReference[j]+'='+cValue)
ENDIF
NEXT
nHandle := Fcreate( cIniFile )
FOR i := 1 TO LEN(aIni)
FWRITE( nHandle, CHR(13)+CHR(10) )
FOR j := 1 TO LEN(aIni[i])
FWRITE( nHandle, aIni[i,j] + CHR(13)+CHR(10) )
NEXT
NEXT
FCLOSE(nHandle)
SET(_SET_EXACT,lExact)
RETURN .t.
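For example, a hypothetical configuration routine might pair the two functions like this. The .INI name, group name, keys and defaults are illustrative only; the VALTYPE of each default element tells DC_INILOAD() how to convert the text value:

FUNCTION MyConfig()   // hypothetical caller
   LOCAL aDefaults := { 'W+/B', 5, .t. }                 // color, window limit, sounds
   LOCAL aKeys     := { 'MAINCOLOR', 'MAXWINDOWS', 'SOUNDS' }
   DC_INILOAD( 'MYAPP.INI', 'SETUP', aDefaults, aKeys )  // read the [SETUP] group
   /* ... configuration screens may modify aDefaults ... */
   DC_INISAVE( 'MYAPP.INI', 'SETUP', aDefaults, aKeys )  // write the values back
   RETURN nil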
Using Array Files for Meta-Data
Array files contain array data in a "binary" format. In an array-based architecture, this is the simplest and fastest method of restoring configuration arrays from disk. An array-file Data-Dictionary is basically a set of individual files on the disk, one for each resource object required by the application. A naming convention for array files is usually required to ensure that the required resource object is restored by the application. For example, Menu array files could be given a .DCM extension, Data-entry array files could be given a .DCE extension, Browse array files could be given a .DCB extension, etc. The file name would be the same as the "tag name" of the resource object requested by the program. For example, the function DC_MENULOAD("MAINMENU") would return an array from the contents of a file named MAINMENU.DCM and the function DC_EDITLOAD("CUSTOMER") would return a data-entry array from the contents of a file named CUSTOMER.DCE.
Here are two functions, DC_ASAVE() and DC_ARESTORE(), that can be used to support a system of array files for saving or restoring any multi-dimensional array.
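A minimal version of the same idea can be sketched in a few lines by leaning on DC_AR2STR() and DC_STR2AR(), which are listed later in this paper, plus low-level file i/o. The names MyASave() and MyARestore() below are illustrative stand-ins, not the library versions:

FUNCTION MyASave( aArray, cFileName )       // write any array to an array file
   LOCAL nHandle := FCREATE( cFileName )
   IF nHandle < 0
      RETURN .f.
   ENDIF
   FWRITE( nHandle, DC_AR2STR( aArray, .t. ) )   // array -> binary string -> disk
   FCLOSE( nHandle )
   RETURN .t.

FUNCTION MyARestore( cFileName )            // rebuild the array from the file
   LOCAL nHandle := FOPEN( cFileName ), nSize, cBuffer
   IF nHandle < 0
      RETURN {}
   ENDIF
   nSize   := FSEEK( nHandle, 0, 2 )        // size = offset of end-of-file
   FSEEK( nHandle, 0, 0 )
   cBuffer := SPACE( nSize )
   FREAD( nHandle, @cBuffer, nSize )
   FCLOSE( nHandle )
   RETURN DC_STR2AR( cBuffer )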
Storing Arrays to Database fields as Token Strings
When an array needs to be stored to a memo field or a character field, a common technique is the "token" method. This is widely accepted because tokenized data is easier to maintain with database editing utilities in the event that the database becomes corrupted. Converting an array to tokens is rather simple but is usually limited to arrays whose elements are all character strings. A tokenized character string is simply a set of data items separated by a common delimiter character. My personal preference, when storing tokenized data, is to use the vertical bar "|" delimiter because, when editing databases, it is more obvious that the information in the field is tokenized and should not be tampered with other than by the resource editor.
Here are two functions, DC_ARRAYTOKEN() and DC_TOKENARRAY(), that can be used for converting tokenized data.
FUNCTION dc_arraytoken( aTokens, cDelim )
LOCAL i, nLength := LEN(aTokens), cString := ""
FOR i := 1 TO nLength
cString += aTokens[i] + IIF(i<nLength,cDelim,'')
NEXT
RETURN cString
/* ----------------- */
FUNCTION dc_tokenarray ( cString, cDelim )
LOCAL aTokens := {}, nFound
DO WHILE .t.
nFound := AT(cDelim,cString)
IF nFound > 0
AADD( aTokens, SubStr(cString,1,nFound-1) )
cString := SubStr(cString,nFound+1)
ELSE
AADD( aTokens, cString )
EXIT
ENDIF
ENDDO
RETURN aTokens
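A quick round trip shows the intent (the function name is illustrative); the tokenized string could then be REPLACEd into a character or memo field:

FUNCTION ColorDemo()   // hypothetical
   LOCAL aColors := { 'W+/B', 'N/BG', 'GR+/R' }, cTokens, aBack
   cTokens := DC_ARRAYTOKEN( aColors, '|' )   // -> "W+/B|N/BG|GR+/R"
   aBack   := DC_TOKENARRAY( cTokens, '|' )   // -> { 'W+/B', 'N/BG', 'GR+/R' }
   RETURN nil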
Storing Arrays to Database fields as Linked-List Strings
Most array-objects in a Data-Driven system are multi-dimensional and may be constructed of sub-arrays that are non-symmetrical, ragged-symmetrical, parallel-symmetrical, or combinations. They may also contain elements of mixed types such as character, numeric, date or logical. No matter how complicated the array structure, it can be converted to a string and converted back to the original array (with the exception of code-blocks). Arrays that are converted to strings will be referred to as "string-arrays". These strings can be stored to a character type field or a memo type field in the database. If the array being converted always produces a string-array that has a fairly predictable length, then it may be wiser to store the string-array to a fixed-length character field. If it produces a string-array that can vary greatly in length then it is suggested that it be stored to a variable length memo type field. Here are two functions, DC_AR2STR() and DC_STR2AR(), that can be used for converting arrays for storage to memo fields. NOTE: This technique has been tested extensively with various array types and memo field drivers and will work with all .DBT memos, .FPT memos, and .DBV memos supported by CA-Clipper, CA-Visual Objects and third-party drivers.
FUNCTION dc_ar2str ( aArray, lHeader )
LOCAL cArray := ''
lHeader := IIF(Valtype(lHeader)='L',lHeader,.f.)
IF lHeader
cArray := CHR(1)+'Array String:'
ENDIF
_dcStore( aArray, @cArray )
RETURN cArray
/* ----------------- */
FUNCTION dc_str2ar ( cString )
LOCAL nPosition := 1, cArray := cString
IF SubStr(cArray,1,14)==CHR(1)+'Array String:'
cArray := SubStr(cString,15)
ENDIF
RETURN _dcGet( @nPosition, @cArray )
/* ----------------- */
STATIC FUNCTION _dcStore( xThing, cArray )
LOCAL cItem
IF Valtype( xThing ) == "A"
_dcarray( xThing, @cArray )
ELSE
cItem := _dcitem( xThing )
IF Valtype( cItem ) = 'C'
cArray += cItem
ENDIF
ENDIF
RETURN nil
/* ----------------- */
STATIC FUNCTION _dcArray( aArray, cArray )
LOCAL i, cItem, cL2bin := l2bin(len(aArray))
IF CHR(26)$cL2bin
cArray += "O"+ DC_l2Dec(len(aArray))
ELSE
cArray += "A"+ cL2bin
ENDIF
FOR i := 1 TO Len(aArray)
cItem := _dcitem( aArray[i], @cArray )
IF Valtype( cItem ) = 'C'
cArray += cItem
ENDIF
NEXT i
RETURN nil
/* ----------------- */
STATIC FUNCTION _dcItem ( xItem, cArray )
LOCAL cRetVal, cType := Valtype( xItem ), cL2bin
DO CASE
CASE cType == "C"
cL2bin := l2bin( Len( xItem ))
/* -- memo fields can't store CHR(26) -- */
IF CHR(26)$cL2bin
cRetVal := "M"+DC_l2Dec( len( xItem)) + xItem
ELSE
cRetVal := "C"+cL2bin+xItem
ENDIF
CASE cType == "N"
IF '.'$STR(xItem)
xItem := STR(xItem)
cRetVal := "F"+l2Bin( len( xItem)) + xItem
ELSE
cL2bin := l2bin(xItem)
/* -- memo fields can't store CHR(26) -- */
IF CHR(26)$cL2bin
cRetVal := "W"+DC_l2Dec(xItem)
ELSE
cRetVal := "N"+l2bin(xItem)
ENDIF
ENDIF
CASE cType == "L"
cRetVal := "L"+IIF(xItem, "T", "F")
CASE cType == "U"
cRetVal := "U"
CASE cType == "D"
cRetVal := "D"+l2bin( xItem - ctod("01/01/70") )
CASE cType == "B"
cRetVal := "B"
OTHERWISE
_dcStore( xItem, @cArray )
ENDCASE
RETURN cRetVal
/* ----------------- */
STATIC FUNCTION _dcGet ( nPosition, cArray )
LOCAL nLength, i, cAttrib, cRetVal
cAttrib := substr( cArray, nPosition++, 1 )
DO CASE
CASE cAttrib $ 'CNADF'
nLength := bin2l( substr( cArray, nPosition, 4 ) )
nPosition += 4
DO CASE
CASE cAttrib == "C"
cRetVal := substr( cArray, nPosition, nLength )
nPosition += nLength
CASE cAttrib == "F"
cRetVal := VAL(substr( cArray, nPosition, nLength ))
nPosition += nLength
CASE cAttrib == "N"
cRetVal := nLength
CASE cAttrib == "A"
cRetVal := array( nLength )
FOR i := 1 TO nLength
cRetVal[i] := _dcget( @nPosition, @cArray )
NEXT i
CASE cAttrib == "D"
cRetVal := ctod("01/01/70")+nLength
ENDCASE
CASE cAttrib = 'M'
nLength := dc_dec2l( substr( cArray, nPosition, 12 ))
nPosition += 12
cRetVal := substr( cArray, nPosition, nLength )
nPosition += nLength
CASE cAttrib = 'W'
nLength := dc_dec2l( substr( cArray, nPosition, 12 ))
nPosition += 12
cRetVal := nLength
CASE cAttrib == "O"
nLength := dc_dec2l( substr( cArray, nPosition, 12 ))
nPosition += 12
cRetVal := Array( nLength )
FOR i := 1 TO nLength
cRetVal[i] := _dcget( @nPosition, @cArray )
NEXT i
CASE cAttrib = 'L'
cRetVal := IIF( substr(cArray, nPosition++,1) == "T", .t., .f.)
CASE cAttrib $ 'UB'
cRetVal := nil
ENDCASE
RETURN cRetVal
/* ------------------- */
STATIC FUNCTION dc_dec2l ( cNum )
RETURN VAL(Substr(cNum,1,3))*1 + ;
VAL(Substr(cNum,4,3))*256 + ;
VAL(Substr(cNum,7,3))*65536 + ;
VAL(Substr(cNum,10,3))*65536*65536
/* ------------------- */
STATIC FUNCTION dc_l2dec ( nNum )
LOCAL cVal := l2bin( nNum )
RETURN STRTRAN(STR(ASC(SubStr(cVal,1,1)),3) + ;
STR(ASC(SubStr(cVal,2,1)),3) + ;
STR(ASC(SubStr(cVal,3,1)),3) + ;
STR(ASC(SubStr(cVal,4,1)),3),' ','0')
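For instance, a browse configuration array could be parked in a memo field of a resource table and rebuilt on demand. This is only a sketch: the alias (RESOURCE) and field name (ARRDATA) are illustrative, and the .t. argument asks DC_AR2STR() to prepend its header signature so the field contents are self-identifying:

FUNCTION BrowseConfigSave( aBrowse )        // assumes RESOURCE.DBF is open shared
   IF RESOURCE->(RLOCK())
      RESOURCE->ARRDATA := DC_AR2STR( aBrowse, .t. )
      RESOURCE->(DBCOMMIT())
      RESOURCE->(DBUNLOCK())
      RETURN .t.
   ENDIF
   RETURN .f.

FUNCTION BrowseConfigLoad()
   RETURN DC_STR2AR( RESOURCE->ARRDATA )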
Storing Arrays to Database fields as Blobs or Array-Fields
It is becoming more and more popular to utilize the special features of third-party data-drivers and the new CA-Clipper 5.3 data-drivers for storing arrays to database fields. Many of these drivers support a new class of memo field that allows storage of any type of data. Some will allow only storage of character strings and arrays. These fields are commonly referred to as "Array-Field memos". Others will allow storage of BLOBs, or "binary large objects". Blob-storage systems can handle any kind of data, from a text file to an executable (.EXE) program. For example, a complete set of array files can be stored to a blob database and re-written to disk. The FlexFile data-driver can handle blobs for both CA-Clipper 5.2 and 5.3. The DBFBLOB driver that ships in the box with CA-Clipper 5.3 can also perform this function.
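For example, with DBFBLOB a whole array file (or any other file) can be pushed into a blob field and pulled back out to disk later. The sketch below assumes the CA-Clipper 5.3 blob API (BLOBIMPORT()/BLOBEXPORT()) and an illustrative table named BLOBDICT with a character field TAGNAME and a blob memo field ARRFILE; check your driver's documentation for the exact calls it supports:

FUNCTION FileToBlob( cTagName, cFileName )   // archive a file into a blob field
   SELECT BLOBDICT                           // assumes USE BLOBDICT VIA "DBFBLOB" was done
   APPEND BLANK
   REPLACE TAGNAME WITH cTagName
   RETURN BLOBIMPORT( FIELDPOS("ARRFILE"), cFileName )

FUNCTION BlobToFile( cFileName )             // write the current blob record back to disk
   SELECT BLOBDICT
   RETURN BLOBEXPORT( FIELDPOS("ARRFILE"), cFileName )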
Storing Arrays to Code
Over the years, many application generator systems have been designed around a "passive" Data-Dictionary. The entire application is created by a set of design tools that maintains the Data-Dictionary and then writes out source code based on a "template" for each language supported by the application generator. The code that is generated must be compiled and linked into an executable program using the appropriate language compiler and libraries. These application generators are incapable of producing "user-modifiable" applications because the end application must always consist entirely of compiled code.
Writing the data-dictionary information to compiled code provides the advantage of much faster performance, less file-handles, better memory management, and the reliability of early-binding plus encapsulation of the entire application into an executable program. Unfortunately, most Data-Driven systems don't give the designer the option of committing desired portions of the Data-Dictionary to code and maintaining other portions of the application as user-definable.
Probably the greatest advantage of an Array-Based architecture is the simplicity of converting any portion of the application to executable code. Since the live application is driven by array objects, the process of running an application is simply a system of loading arrays with meta-data and passing the arrays to the appropriate sub-system for execution. Up to now we have talked about loading the arrays from files like databases and text files, which are older ideas, or array files, which is a newer idea. Another new idea in Data-Driven designs is the concept of linking the Data-Dictionary into the executable program. This is a very simple concept to implement in an Array-Based architecture because the process is merely the conversion of any multi-dimensional array to source code that, when compiled and executed, returns the contents of the original array.
For example, let's say you have designed an application that has been tested so well you haven't needed to modify the menu dictionary in months. You can decide to commit this dictionary, or portions of it, to code by simply writing it out to a set of .PRG files, compiling the source to .OBJ files, then linking the .OBJ files into the executable program. Let's also say that the function you use to get a menu from the dictionary automatically tests, before opening the menu dictionary, to see if a function exists whose name is the menu's tag name plus the suffix _M(). If it does, then it will call the function to return the menu array rather than extracting it from a file. Here's a little piece of code, based on functions in a Data-Driven API library, that could be used to write out an entire menu dictionary to code:
aMenuNames := DC_MenuList()
FOR i := 1 TO LEN(aMenuNames)
aMenu := DC_MenuLoad(aMenuNames[i])
DC_Array2Prg( aMenu, ;
aMenuNames[i], ;
aMenuNames[i]+"_M()")
NEXT
Here's a function, DC_ARRAY2PRG(), that will convert any array to source code:
FUNCTION dc_array2prg ( aArray, cFileName, cFunction, cCode )
LOCAL nHandle, cSaveScrn, nChoice := 1, aOutArray := {"",""},;
nLevel := 2
cFileName := IIF(VALTYPE(cFileName)='C',cFileName,'')
IF !('.' $ cFileName )
cFileName += '.PRG'
ENDIF
nHandle := FCREATE( cFileName )
aOutArray[1] += 'FUNCTION ' + cFunction + CHR(13)+CHR(10)
aOutArray[1] += 'LOCAL a'+CHR(13)+CHR(10)
IF !EMPTY(cCode)
aOutArray[1] += cCode
ENDIF
aOutArray[1] += ;
'_dcaprg'+ALLTRIM(STR(nLevel))+'(@a)'+CHR(13)+CHR(10)
aOutArray[nLevel] += ;
'STATIC PROCEDURE _dcaprg'+ALLTRIM(STR(nLevel))+'(a)'+;
CHR(13)+CHR(10)
_dcar2prg ( aArray, nHandle, '', aOutArray, @nLevel )
aOutArray[nLevel] += 'RETURN' + CHR(13)+CHR(10)
aOutArray[1] += 'RETURN a'+REPL(CHR(13)+CHR(10),2)
FOR nLevel := 1 TO LEN(aOutArray)
FWRITE( nHandle, aOutArray[nLevel] )
NEXT
FCLOSE(nHandle)
RETURN nil
/* -------------------- */
STATIC PROCEDURE _dcar2prg ( aArray , nHandle, cElement, ;
aOutArray, nLevel )
LOCAL nElement, cTextLine, cType, nArrayLen, cValue, i, n, ;
nLength, cDelim, j, cChar, cNewText, cNewLine
nArrayLen := LEN(aArray)
aOutArray[nLevel] += 'a'+cElement+' := ARRAY('+;
ALLTRIM(STR(nArrayLen))+')'+CHR(13)+CHR(10)
FOR nElement := 1 TO nArrayLen
IF VALTYPE(aArray[nElement])='U'
LOOP
ENDIF
cValue := aArray[nElement]
cType := VALTYPE(cValue)
cTextLine := ''
DO CASE
CASE cType='C'
cDelim := '"'
IF LEN(cValue)=0
cTextLine := cDelim + cDelim
ELSE
FOR i := 1 TO LEN(cValue) STEP 100
cNewLine := SubStr(cValue,i,100)
IF CHR(13) $ cNewLine .OR. CHR(10) $ cNewLine .OR. ;
'"' $ cNewLine
cNewText := ''
FOR j := 1 TO LEN(cNewLine)
cChar := SubStr(cNewLine,j,1)
IF cChar=CHR(10)
cNewText += cDelim + '+CHR(10)+'+ cDelim
ELSEIF cChar=CHR(13)
cNewText += cDelim + '+CHR(13)+'+ cDelim
ELSEIF cChar='"'
cNewText += cDelim + '+CHR(34)+'+ cDelim
ELSE
cNewText += cChar
ENDIF
NEXT
cNewLine := cNewText
ENDIF
cTextLine += cDelim + cNewLine
IF i < LEN(cValue) .AND. LEN(cValue)>100
cTextLine += cDelim + '+;'+CHR(13)+CHR(10)
ELSE
cTextLine += cDelim
ENDIF
NEXT
IF RIGHT(cTextLine,1)=chr(10)
cTextLine += ' ' + cDelim + SUBSTR(cValue,i,100) + ;
cDelim
ENDIF
ENDIF
CASE cType='N'
cTextLine := ALLTRIM(STR(cValue))
CASE cType='D'
cTextLine := 'CTOD("'+DTOC(cValue)+'")'
CASE cType='L'
cTextLine := IIF(cValue,".T.",".F.")
CASE cType='A'
_dcar2prg( aArray[nElement], nHandle, ;
cElement+'['+ALLTRIM(STR(nElement,4))+']',aOutArray,@nLevel )
LOOP
OTHERWISE
LOOP
ENDCASE
aOutArray[nLevel] += ;
'a'+cElement+'['+ALLTRIM(STR(nElement))+'] := ' ;
+ cTextLine+CHR(13)+CHR(10)
IF LEN(aOutArray[nLevel]) > 10000
aOutArray[nLevel] += 'RETURN' + REPL(CHR(13)+CHR(10),2)
nLevel++
aOutArray[1] += ;
'_dcaprg'+ALLTRIM(STR(nLevel))+'(@a)'+CHR(13)+CHR(10)
AADD( aOutArray, 'STATIC PROCEDURE _dcaprg' + ;
ALLTRIM(STR(nLevel))+'(a)' + CHR(13)+CHR(10) )
ENDIF
NEXT
RETURN
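To make the result concrete: given a small menu array such as { "Exit", { 1, .T. } } and the function name MAINMENU_M(), the routines above would emit a .PRG along these lines (illustrative output traced from the logic above):

FUNCTION MAINMENU_M()
LOCAL a
_dcaprg2(@a)
RETURN a

STATIC PROCEDURE _dcaprg2(a)
a := ARRAY(2)
a[1] := "Exit"
a[2] := ARRAY(2)
a[2][1] := 1
a[2][2] := .T.
RETURN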
Performance Issues
The number one concern on the mind of developers who are either considering Data-Driven systems or have experience with them is the issue of performance. They worry that they will make a huge investment in a technology only to find out, too late, that it won't live up to performance expectations. Don't let your own past experiences or what you have heard from others decide the issue for you until you have test-driven some Data-Driven systems on the hardware you intend to use for your applications. As I said earlier, the CA-Clipper language and today's computer systems just don't behave like the systems you may have experienced in the past. In addition, Data-Driven systems architects have learned a lot about performance issues and have matured in their design approach to utilize techniques that weren't available or were unknown in the past. I have already discussed concepts that you can use in your code to fine-tune an application to perform optimally in the section titled Using an Array-Based Architecture; however, I am going to cover this issue in more detail and give you some more hints for speeding up your Data-Driven applications. Most of the performance hits in Data-Driven applications occur at the time the Meta-Data is being loaded into the resource arrays.
1. Use hard-code whenever possible. A well designed API library will use hard-coded sub-systems to actually run the resource and extract the custom information from a LOCAL array that was previously loaded from the Data-Dictionary. This type of design will cause the menu, data-entry, browse, screen, etc. resource to perform as well as if the entire system were hard-coded.
2. Use an array cache. An array cache is simply a static array that is used to store other arrays as they are loaded from the Data-Dictionary. Such a system should also be smart enough to manage the cache to give priority to arrays that are requested most often and not flush them from the cache when the cache memory limit is reached. A minimal sketch of the idea follows this list.
3. Compile the Array-Data into the executable program. See the section titled Storing Arrays to Code for more information.
4. Use workstation disk resources for Data-Dictionary files. In multi-user systems, the databases are stored on a server (along with the business data) to ensure that every workstation is running the same application. The system can be designed with an option to automatically copy to the local hard drive any Data-Dictionary files that have a newer date/time stamp. Accessing data from a local drive is much faster than accessing it from a server.
5. Keep Data-Dictionary databases open. If the system has plenty of memory and is running in protected mode, then it is a good idea to keep all the Data-Dictionary files and indexes open while the application is running. A system configuration flag can be set to enable this feature. This is especially important when using data-drivers like Advantage xBase Server because of its slow file-opening characteristics.
6. Use a 386 protected mode linker for CA-Clipper applications. It's a good idea to link the delivered executable with a 386 protected mode linker, like CauseWay, because of the improved performance over 286 mode linkers.
7. Provide plenty of system memory. Make lots of EMS available to real mode applications and lots of DPMI memory available to protected mode applications. The CA-Clipper virtual memory manager will start creating swap files when all the array memory is used up. This will greatly hamper performance.
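As an illustration of point 2, a stripped-down cache might look like the sketch below. The function name is hypothetical and the fixed 20-entry limit stands in for the priority and memory-limit management a real library would do; DC_MenuLoad() is the dictionary loader discussed earlier:

STATIC aCache := {}                          // each entry: { cTagName, aResource }

FUNCTION MyCachedLoad( cTagName )
   LOCAL nPos := ASCAN( aCache, {|a| a[1] == UPPER(cTagName)} )
   LOCAL aResource
   IF nPos > 0
      RETURN aCache[nPos,2]                  // cache hit: no dictionary access
   ENDIF
   aResource := DC_MenuLoad( cTagName )      // cache miss: load from the dictionary
   IF LEN(aCache) >= 20                      // crude flush of the oldest entry
      ADEL( aCache, 1 )
      ASIZE( aCache, LEN(aCache)-1 )
   ENDIF
   AADD( aCache, { UPPER(cTagName), aResource } )
   RETURN aResource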
Multi-User Issues
CA-Clipper and CA-Visual Objects are very good multi-user languages and their data-driver systems support the record locking and file locking required for multi-user applications. I will assume that everyone attending this seminar is already familiar with the RLOCK() and FLOCK() functions and that you always write code that opens databases in shared mode, locks records before updating, and commits changes to disk before unlocking the record. Following these simple rules will ensure that your Data-Driven applications will work in a multi-user environment.
In most data-entry systems, it is common to use scatter/gather techniques to prevent the need to keep a record locked during data entry. For example, the data is read into an array or a set of local memvars, the GETs and validations are performed on the temporary data, then the changes are written back to the record. This technique requires that the record be locked only for the few milliseconds required to REPLACE the data. Unfortunately, this technique can complicate the design of "validation" schemes. In a Data-Driven system, the validation system often includes expressions that are evaluated at the time the user completes entry into a field. Many of these validation routines return a logical value based on the data entered into the current field and other fields. When designing validation expressions, it is usually desirable to use field names in the expressions, meaning that current data from the database will be used in determining whether the data has been entered correctly. In a scatter/gather system, the data hasn't been written to the record yet, so there is no way to validate based on a simple expression. Data-Dictionary architects have employed a variety of creative techniques to overcome this problem; however, the simplest and probably the most common practice is to just forget the idea of scatter/gather and keep the record locked during the entire data-entry process for that record. This technique can simplify the design, but it can also cause problems when a user walks away from his/her workstation or clicks on another window while in the middle of a data-entry process. If your system is designed this way, then consider putting a check in the key-handler idle loop for an inactivity time-out period. This is a common practice for invoking screen savers or automatic logoff routines. The same check can determine whether the current record is locked, unlock it during the time-out, and restore the lock when the user returns to the application.
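If the key-handler idle loop calls a routine like the one below on each pass, the lock can be dropped after a period of inactivity and quietly re-acquired when the user becomes active again. This is only a sketch: the names are hypothetical, and a production version would also have to deal with another user grabbing the record while it was unlocked:

STATIC nLastKey := 0, lReleased := .f.

PROCEDURE MyIdleCheck( nTimeOut )
   IF NEXTKEY() != 0                         // a key is waiting: the user is back
      nLastKey := SECONDS()
      IF lReleased .AND. RLOCK()             // quietly take the lock again
         lReleased := .f.
      ENDIF
   ELSEIF !lReleased .AND. SECONDS() - nLastKey > nTimeOut
      DBUNLOCK()                             // inactivity time-out: free the record
      lReleased := .t.
   ENDIF
   RETURN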
Another issue that complicates multi-user applications is the "configuration" problem. Each user logs on from a different workstation and has different access rights. A Data-Driven system must be designed to handle user customization such as user-defined colors, access control, printer drivers, modem ports for dialers, user directories, etc. This is usually handled by a user dictionary database that contains one record for each user and an API library with a log-on function that requests a password and stores the user data into static arrays for access by the system. Most configuration issues are straightforward and can be handled by a field in the user database for each configurable item. Some issues, like color systems, however, can complicate the problem, especially if the system supports a lot of colors. If the user dictionary database contains one field for each color, then you can waste valuable symbol-table memory on a bunch of fields that are accessed only once, during startup. A better idea is to store the entire color array into a memo field in the user database using the DC_AR2STR() function. Another idea is to place configuration *.INI files or array files into a special directory for each user on the file server and have a field in the user dictionary database that points to this directory.
Data Integrity
A Data-Driven API library should be designed to utilize all the "common" features of the CA-Clipper or CA-Visual Objects replaceable data-driver system and to provide database, index, and referential integrity. For example, a File Dictionary would be used to define the database and index files to open as a related group. The dictionary system should also establish the RDD (replaceable data-driver) to use for the files. This provides the developer with the option of choosing database and index drivers for dBASE, FoxPro, Clipper, Paradox, etc., by simply linking the respective drivers into the executable program. The Field Dictionary would be used to define the fields in each database. It is the job of the API library to use the File and Field Dictionary meta-data when opening files and to perform the following functions:
* Use the properly designated RDD for the work group being opened.
* Verify that the databases being opened match the field definitions in the field dictionary.
* Verify that the indexes being opened match the index key and tag name information in the file dictionary.
* Create new databases from the RDD and field information if the database doesn't exist.
* Create new indexes from the RDD and index information if the index doesn't exist or if it is corrupted.
* Establish all the Parent/Child relations for each work group.
* Establish the referential integrity rules for relational databases.
* Establish Key Business Rules, define Domains and establish database triggers to trigger events such as automatically deleting child records when a parent record is deleted.
Needless to say, performing all these tasks in a data-driven application (or any application) is not an easy programming job, and it takes time to develop a good set of functions and a methodology. It is impractical to list source code for this part of a Data-Driven application because it is a finely integrated process and cannot be described in simple terms. Any third-party Data-Driven library product that includes source code is the best source for this information.
Memory Management
Data-Driven systems are often notorious for using up all the memory resources of a computer system because of the amount of "air" or "dead-space" that exists in the databases and/or the arrays. Data-Driven, Array-Based systems may be wonderful in architecture, but they can consume too much memory if the application programmer gets carried away with a design that opens too many databases, instantiates too many data-entry or browse windows, or nests too many sub-system calls.
It's good to remember that an average Data-Driven system will usually consume twice as much memory as an average hard-coded system with the same functionality. Memory may be getting cheaper, but that doesn't necessarily mean that more of it is becoming available to the database application. End-users are demanding that their database applications reside in memory at the same time as their Windows applications yet they refuse to purchase the memory needed for such an environment.
Data-Driven systems should be designed to test the amount of memory available when an application starts, then limit the number of tasks that can be active at any time based on the memory available, rather than allowing the system to crash with an "out of memory" error. Forcing garbage collection just before opening files and/or loading arrays will prevent memory fragmentation. This can be accomplished by using the FT_IDLE() function from the public-domain Nanforum Toolkit library. A system can be incorporated in the API library that forces Data-Dictionary files to be closed and array caches to be flushed if available memory drops below a pre-established amount. The system should access Data-Dictionary databases only to load arrays, eliminating the need to keep those files open.
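For example, a small guard function called before each new task can force a garbage-collection pass and refuse to start the task when the free pool gets low. The 128 KB threshold and the function name below are arbitrary:

FUNCTION MyMemOk()
   FT_IDLE()                   // Nanforum Toolkit: force a garbage-collection pass
   IF MEMORY(0) < 128          // MEMORY(0) returns the estimated free pool in KB
      /* flush the array cache and close Data-Dictionary files here */
      RETURN .f.
   ENDIF
   RETURN .t.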
Event-driven systems are starting to dictate the expected behavior of modern database applications now that the Windows paradigm has become ingrained in our psyche. Unfortunately, this concept wreaks havoc with a computer's memory resources because each instantiation of a form or a window must allocate a large block of static memory. If no limits are placed on the user and he/she is allowed to freely open as many data-entry and browse windows as he/she wants, the system will eventually crash. When designing a data-driven system, the programmer should place restrictions on the number of instantiations allowed. In most non-event-driven systems, this memory-utilization limitation is usually automatic (provided that the arrays are LOCAL or PRIVATE), because the allocated memory is automatically released when the user exits from the task. Programmers are under much more pressure these days, however, to maintain data-entry and browse configurations in STATIC or PUBLIC arrays to allow the user to easily navigate between different work areas and data-entry screens. A system that is designed this way is wonderful for the user, but a nightmare for the programmer.
Databases use one symbol in the symbol-table for each database field. Applications with many databases and lots of fields can consume so much conventional memory that they cause a "conventional memory exhausted" error unless this small memory pool is well-managed. If all fields in the application are pre-defined and declared to the compiler, then the fields are added to the symbol table at link time rather than at run time. This early-binding improves the memory characteristics of the program. It isn't usually practical to do this in Data-Driven applications because it tends to defeat the purpose of a Data-Driven design; however, it really is not at all difficult to write a function that will traverse the field dictionary file and write a .PRG file that contains nothing but FIELD declarations. The file can then be compiled and linked into the application engine to help resolve runtime memory problems.
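Such a generator can be very small. The sketch below assumes a field dictionary table named FIELDDIC, already open, with a FIELDNAME column; both names are illustrative:

FUNCTION MyFieldPrg( cOutFile )              // write FIELD declarations to a .PRG
   LOCAL nHandle := FCREATE( cOutFile )
   SELECT FIELDDIC
   DBGOTOP()
   DO WHILE !EOF()
      FWRITE( nHandle, 'FIELD ' + ALLTRIM(FIELDDIC->FIELDNAME) + CHR(13)+CHR(10) )
      DBSKIP(1)
   ENDDO
   FCLOSE( nHandle )
   RETURN nil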
Documentation, Modularity and Conventions
I have inherited many projects over the years, and have rarely picked up a project that included any documentation at all. Any Data-Driven system that lacks a good documentation system or does not provide a modular API-library can be very difficult to use or maintain. When developing a system, write the system as though it will be sold and supported as a library product rather than simply used for in-house projects. This will force the programmer(s) to establish high standards for modularity, documentation and maintainability and will greatly increase the probability of a successful project. Establish a system of programming conventions, then document and enforce the conventions among the programming team.
CONCLUSION
Developing and maintaining a Data-Driven system can be challenging and exciting. The project requires a skilled programmer with vision, commitment and patience. If you possess these qualities, then you will be rewarded by your development efforts. Don't hesitate to spend a few hundred dollars looking at third-party Data-Driven products. Even if you decide to develop your own system, a third-party system will provide you with many good ideas and save you thousands of dollars in development costs.