
Can ChatGPT really help with WebReports development? A hands-on test
With generative AI applications like ChatGPT quickly becoming a go-to for everything from data analysis to planning a vacation, we were curious how it might be able to help with WebReports development. We tasked Andrew, one of our developers, with putting ChatGPT to work for us and investigating exactly what it might be capable of.
Ultimately, Andrew discovered that we’re not yet at the point where we can commission ChatGPT or other AI apps to create WebReports applications, but that AI can provide some useful enhancements to our development workflow. Here are a few examples.
We have a suite of about 40 add-on sub-tags that customers receive as part of the Ravenblack Productivity Suite. While we have existing PDF-based documentation, we also provide online help as part of our support package. Converting documentation originally written in Word/PDF can be cumbersome and fiddly, so Andrew asked ChatGPT to create HTML versions of our documentation.
As expected, ChatGPT generally did a good job of this, using our specified templates and examples to create tables that matched our prescribed style. However, the following example also shows both the real potential value and the potential pitfalls of this technology.
To set the scene, one of our new sub-tags, RB_DECODE, is an extension of an existing sub-tag called DECODE, which converts data values to new values based on a simple string character match. We added a variety of additional conditions to this sub-tag to allow detection of partial matches, numbers, and null values, among other things. We provided ChatGPT with a table of all these functions, as well as a table of examples. In addition to converting these tables, ChatGPT helpfully (and unexpectedly) created an additional example for one of our tables, correctly noting that we had not included an example for one of our functions. Not only did it correctly deduce the syntax for the new example, but it even switched up our simple candy-bar example from Skittles to Mars bar (a worthy change in my opinion). Bearing in mind that all the examples we provided were written in the unique WebReports tag/sub-tag syntax, and that our sub-tag amends that syntax in a completely new way, ChatGPT created the example below from a very limited data set. I’ll admit that I was impressed!
ChatGPT syntax: [LL_REPTAG_'MARS BAR' RB_DECODE:@isNotInString[LL_REPTAG_'skittles snickers mars bar' /]:Fruit:Candy /]
The purpose of this syntax may not be immediately clear; ChatGPT’s description, shown below, explains it better than I can! The logic here is a bit convoluted and is not an obvious use of the syntax (which is why we didn’t include this example in the first place).
In addition to generating the example syntax, our example tables also included a column with the expected result. ChatGPT happily inserted the answer “Fruit” into the result column for its self-generated example. On reading through the syntax I realized that this answer appeared to be wrong; however, I was so dazzled by ChatGPT at this point that I actually plugged this syntax into a WebReport to verify that I was right and ChatGPT was wrong.
Essentially, I was already questioning my own judgment because of the general “confidence” exuded by ChatGPT.
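To show why “Fruit” looked wrong to me, here is a minimal sketch of the logic in Python. To be clear, this is not the actual RB_DECODE implementation; the function name, the parameter order, and the case-insensitive substring check are all assumptions made purely for illustration.

# A sketch of how I read the @isNotInString condition, NOT the real RB_DECODE code.
# Assumption: the check is a case-insensitive substring match.
def rb_decode_is_not_in_string(value: str, haystack: str,
                               match_result: str, default_result: str) -> str:
    # Return match_result when value is NOT found in haystack, else default_result.
    if value.lower() not in haystack.lower():
        return match_result
    return default_result

# 'MARS BAR' does appear in 'skittles snickers mars bar', so the "not in string"
# condition fails and the tag should fall through to 'Candy', not 'Fruit'.
print(rb_decode_is_not_in_string("MARS BAR", "skittles snickers mars bar",
                                 "Fruit", "Candy"))  # prints: Candy

Read this way, the expected result is “Candy” rather than “Fruit”, which lines up with what I found when I verified the tag in a WebReport.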
At this point Andrew asked ChatGPT to clarify its reasoning.
This query produced a sound explanation that ultimately arrived at the correct answer. When Andrew pointed out that this was a different answer than ChatGPT had originally given us, we got another excellent response (along with the correct answer), shown in the screenshot below.
Even after decades of working in computer science, I can’t help but be impressed by this apparent “comprehension” of the WebReports syntax combined with a unique Ravenblack extension. If I’m honest, I’m slightly jealous of how well ChatGPT explained this example.
In summary, this is all very impressive, but it is also a reminder that generative AI works from a highly variable data set and is always learning and adapting: very human qualities. Unfortunately, that can also result in variable outputs that (also in a very human way) can include errors. At this point we are treating tools like ChatGPT as a smart, fast, prolific, eager-to-learn, sometimes overconfident, and ultimately fallible intern. Some oversight required.
In future blogs we will be describing how AI can be used safely with Ravenblack tools and APIs to accelerate and vastly improve your development productivity.
